Hi Aaron - I have been an embedded software developer (medical devices) for decades. I want to say thank you for this tutorial. Really appreciate the pace and the content. Thanks for not wasting time on fluff. Following you now.
Thank you. I really appreciate your comment! Let me know if there is any specific content that you would like to see.
I am going to be looking into function calling from different LLMs. I have been playing around with the so-called "computer use" workflow from Anthropic. It's intriguing but also runs into rate limits quickly.
I am most interested in practical applications of using function calling from LLMs with the hope of automating various repetitive tasks I find myself needing / wanting to do fairly often.
Bro, you nailed it. For the past 3 days I've been trying to build a chatbot for myself. Finally I saw this video ❤🔥
I was stuck on saving the chat history. It helped me, sir... thanks a lot... you just got a new subscriber ✨😌
Thanks Aaron, this is ace. Took me a few goes to figure out how to load my API key, but I figured it out - it's quite magical when you get it going! Thanks again!
The most awesome, concise, neat, clean and up-to-date content on how to leverage Gemini API
Thank you so much for your comment! I’m happy you found the video useful.
As always, very clear explanation and demo! Really enjoyed your informative lecture, greatly appreciated 👍👍
Thank you! I’m happy you found the video useful. There will be many more coming.
Thank you so much for the video!! I don't know how to code and I learned how to create inputs thanks to you!
That’s great! Learning to code is challenging but very rewarding. I hope you will find my future videos helpful as well 😊
your content is so well delivered thank you so much
Thank you. I appreciate it!
Nice video, crisp and concise. Thanks and you've got a new subscriber.
Thank you. I’m glad you enjoyed the video!
Great tutorial
Thank you so much for this video. Please do make more. Really simple and applicable.
Super easy to grasp..
Great tutorial! It was clear and easily reproducible. Thank you!
Thank you for the comment! I'm glad the video was helpful. I plan to make more, so let me know what you would like to see.
Thank you sir, it was a good help to build my mini project for college.
How am I supposed to access the history?
You don't
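For what it's worth, if the chatbot is built with the google-generativeai Python SDK (as in the video), the chat session does keep the whole conversation on its history attribute. A minimal sketch, with a placeholder API key:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")
chat = model.start_chat(history=[])
chat.send_message("Hello, who are you?")

# Each history entry has a role ("user" or "model") and the message parts.
for message in chat.history:
    print(message.role, ":", message.parts[0].text)
```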
Can you make this Gemini able to recognize images and create titles, tags and subjects (metadata)? Thanks
Yes, definitely! Gemini 1.5 is multimodal and can take images, audio and video in the prompt with text. I am working on a video on Gemini for vision / images right now and will let you know when it’s posted.
@@aarondunn-zt7ev okay, ty
I just uploaded my video on Gemini for vision. Check it out here: ruclips.net/video/XcMcNBZawAU/видео.htmlsi=6p93NAlB6o9DePnz
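If anyone wants to try the metadata idea above before that video, here is a minimal sketch using the google-generativeai SDK with a local image; the file name and API key are placeholders:

```python
import PIL.Image
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

image = PIL.Image.open("photo.jpg")  # hypothetical local image file
prompt = "Suggest a title, five tags, and a one-line subject description for this image."

# generate_content accepts a mixed list of text and images in a single prompt.
response = model.generate_content([prompt, image])
print(response.text)
```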
WOW, great video, thanks a lot!
Nice great and wonderful
How can you change this to run on Google Cloud Run?
To run your chatbot on Google Cloud Run, you'll need to containerize your application using Docker, then deploy it. This involves creating a Dockerfile to package your code and dependencies, pushing the image to Google Container Registry, and deploying it via the Google Cloud Console. You can follow Google's Cloud Run quickstart guide (cloud.google.com/run/docs/quickstarts ) for detailed steps. If there's interest, I could create a detailed tutorial video on this process in the future.
@@aarondunn-zt7ev That would be great if you could make such a tutorial. I have tried to put it on Cloud Run but my understanding is lacking. I keep getting a "Service Unavailable" message on my run URL. I'm also not sure if I need to get a secret key etc. (my program runs fine from the console, but fails from the Cloud Run URL). Anyway, great tutorials and I have subscribed :)
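In the meantime, two things that commonly cause exactly that "Service Unavailable" error: the container has to listen on the port Cloud Run passes in the PORT environment variable, and the API key is best supplied as an environment variable (or via Secret Manager) rather than hard-coded. A rough sketch, assuming a simple Flask server; your app's framework and routes may differ:

```python
import os
from flask import Flask, request, jsonify
import google.generativeai as genai

app = Flask(__name__)

# Read the API key from an environment variable set on the Cloud Run service
# (e.g. gcloud run deploy ... --set-env-vars GOOGLE_API_KEY=...), not from code.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

@app.route("/chat", methods=["POST"])
def chat():
    user_message = (request.get_json(silent=True) or {}).get("message", "")
    response = model.generate_content(user_message)
    return jsonify({"reply": response.text})

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via PORT;
    # not binding to it is a frequent cause of "Service Unavailable".
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```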
Error: No module named 'google'
But I downloaded all the requirements. Can I know the reason?
How can we feed it our customised data so that it behaves based on that data and responds accordingly?
There are 2 approaches you can take. One is to just put all your data inside the prompt. This is possible nowadays even with big datasets as models now have huge context windows (up to 2 million tokens for Gemini 1.5 Pro). However, adding too much data to the prompt can be costly and even result in lower quality answers.
The other approach is RAG (Retrieval Augmented Generation) which extracts smaller chunks of your dataset that are similar to your input query / question and then uses that data to produce an answer. This is much more efficient and when done properly may even result in better outputs.
With either method, you would include in the prompt an instruction to only consider the data you provided when generating the response. This doesn’t work 100% of the time but overall it’s pretty reliable.
I put out a video a little while back that demonstrated RAG for a chatbot and how to restrict the model to the data you provide. Check out the video here: ruclips.net/video/cpm28hEvGAA/видео.htmlsi=qAX6wM0ytI_VSu7v
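To make the RAG idea a bit more concrete, here is a rough sketch of the retrieval step using the google-generativeai SDK; the mini knowledge base, the question, and the API key are all made up for illustration:

```python
import numpy as np
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

# Hypothetical knowledge base: small chunks of your own data.
chunks = [
    "Our store is open Monday to Friday, 9am to 5pm.",
    "Shipping within Europe takes 3 to 5 business days.",
    "Returns are accepted within 30 days of purchase.",
]

def embed(text, task_type):
    result = genai.embed_content(model="models/text-embedding-004",
                                 content=text, task_type=task_type)
    return np.array(result["embedding"])

chunk_vectors = [embed(c, "retrieval_document") for c in chunks]

question = "How long does delivery take?"
query_vector = embed(question, "retrieval_query")

# Pick the chunk most similar to the question (cosine similarity).
scores = [np.dot(query_vector, v) / (np.linalg.norm(query_vector) * np.linalg.norm(v))
          for v in chunk_vectors]
best_chunk = chunks[int(np.argmax(scores))]

model = genai.GenerativeModel("gemini-1.5-flash")
prompt = (f"Answer using only the context below. If the answer is not in the context, "
          f"say you don't know.\n\nContext: {best_chunk}\n\nQuestion: {question}")
print(model.generate_content(prompt).text)
```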
Which Google Cloud project should I choose for the API key?
You would select whatever project you want your Gemini code associated with. If you don’t have any Google Cloud projects then you should be able to select a Default option. Let me know if this doesn’t work and I’ll take a closer look.
I chose the Generative Language Client and there is a key error every time.
I am a French developer, and I would like to know, first, if it is possible to make the ChatBot speak in French and, secondly, if it is possible to get the remaining tokens. Otherwise, thank you very much, this video really helped me.
I believe it knows French; it just needs to have inputs in French.
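On the second question: as far as I know there is no call that returns a remaining-token balance (the free tier is limited by requests per minute and per day), but the SDK can count tokens before a call and report usage after one. A minimal sketch, assuming the google-generativeai SDK and a placeholder API key:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

# Count tokens in a prompt before sending it.
print(model.count_tokens("Bonjour, peux-tu m'aider ?"))

# After a call, the response reports how many tokens were used.
response = model.generate_content("Réponds en français : quelle heure est-il ?")
print(response.usage_metadata)  # prompt, candidate, and total token counts
```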
Actually, the clearest tutorial I've ever seen, neat explanation. But I have a doubt: what should we do when we prompt the model with a PDF? Can you make a video for that? And could it also be integrated with a UI design?
I don't know about PDF input, but for the UI you have to learn front-end development separately.
Great video, thanks!
Thanks. Happy to hear it was useful.
How many requests can you make for each key?
1,500 requests as of now.
@@harshrana3012 So it's free to use, and I can get another key if I exceed 1,500?
Hi, can we give it customized data so that it answers only from that customized data and not from anything else?
In the model, give a system instruction like: system_instruction = "Give responses only from the file uploaded and not from outside the PDF or CSV file provided". This will limit it to answering only from the knowledge base given; feel free to customise your system instructions.
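A minimal sketch of that idea with the google-generativeai SDK; the CSV file name, its contents, and the API key are placeholders:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

knowledge_base = open("books.csv").read()  # hypothetical file with your own data

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=(
        "Only answer using the data provided by the user. "
        "If the answer is not in that data, say you don't know."
    ),
)

chat = model.start_chat(history=[])
reply = chat.send_message(
    f"Here is my data:\n{knowledge_base}\n\nWhich books cost under 10 euros?"
)
print(reply.text)
```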
There are 2 approaches you can take. One is to just put all your data inside the prompt. This is possible nowadays even with big datasets as models now have huge context windows (up to 2 million tokens for Gemini 1.5 Pro). However, adding too much data to the prompt can be costly and even result in lower quality answers.
The other approach is RAG (Retrieval Augmented Generation) which extracts smaller chunks of your dataset that are similar to your input query / question and then uses that data to produce an answer. This is much more efficient and when done properly may even result in better outputs.
With either method, you would include in the prompt an instruction to only consider the data you provided when generating the response. This doesn’t work 100% of the time but overall it’s pretty reliable.
I put out a video a little while back that demonstrated RAG for a chatbot and how to restrict the model to the data you provide. Check out the video here: ruclips.net/video/cpm28hEvGAA/видео.htmlsi=qAX6wM0ytI_VSu7v
You are great man cheers
Thanks! I appreciate the comment. Let me know what other content you would like to see.
@@aarondunn-zt7ev It would be great if you used the Gemini API and made a chatbot that works from a custom dataset I provide.
Yo, can anyone tell me how to get the API placeholder URL, or what that is?
Thanks a lot!
You’re welcome! Let me know if there is any specific content that you would be interested in seeing.
How do I add Gemini to my Facebook page?
I don’t have any experience creating Facebook apps, but I can look to do some more research on it. From what I’ve discovered so far, to add a Gemini-powered chatbot to your Facebook page, you'll need to integrate it using Facebook Messenger's API. This involves setting up a Facebook Developer account, creating a Facebook app, and configuring a webhook to handle messages. You'd then connect the Gemini API to process and respond to these messages. It would be an interesting project and video!
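To give a feel for the webhook part, here is a rough sketch from memory of a Flask webhook that verifies itself with Messenger and relays messages through Gemini; the token names are placeholders and the payload format should be double-checked against Facebook's docs:

```python
import os
import requests
from flask import Flask, request
import google.generativeai as genai

app = Flask(__name__)
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

VERIFY_TOKEN = os.environ["VERIFY_TOKEN"]             # value you choose in the app dashboard
PAGE_ACCESS_TOKEN = os.environ["PAGE_ACCESS_TOKEN"]   # from the app's Messenger settings

@app.route("/webhook", methods=["GET"])
def verify():
    # Messenger calls this once to verify your webhook URL.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "Verification failed", 403

@app.route("/webhook", methods=["POST"])
def handle_messages():
    data = request.get_json()
    for entry in data.get("entry", []):
        for event in entry.get("messaging", []):
            if "message" in event and "text" in event["message"]:
                sender = event["sender"]["id"]
                answer = model.generate_content(event["message"]["text"]).text
                # Reply via the Messenger Send API.
                requests.post(
                    "https://graph.facebook.com/v18.0/me/messages",
                    params={"access_token": PAGE_ACCESS_TOKEN},
                    json={"recipient": {"id": sender}, "message": {"text": answer}},
                )
    return "ok", 200
```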
Is there something wrong with the audio?
Nah, that was my laptop.
Hi Aaron, please continue the video on how to set up the chatbot on a website. Also, please show how to use an additional PDF or CSV file to add extra knowledge, say a list of books that you want to sell. How do you set that up?