Hybrid Chatbots: How to Chat with Multiple Data Sources (Pinecone, ChatGPT & More)

  • Published: 12 Sep 2024

Comments • 57

  • @LiamOttley
    @LiamOttley  A year ago

    Leave your questions below! 😎
    📚 My Free Skool Community: bit.ly/3uRIRB3
    🤝 Work With Me: www.morningside.ai/
    📈 My AI Agency Accelerator: bit.ly/3wxLubP

  • @HamzaRashid
    @HamzaRashid A year ago +7

    Great video! Thanks for making this wonderful content for free. AI Consulting is about to get big 😎

  • @AdamPaulTalks
    @AdamPaulTalks A year ago +1

    This is exactly what I was talking about in our call. A hybrid model.

  • @shacharbard1613
    @shacharbard1613 A year ago +1

    Awesome video @LiamOttley mate!
    I learn so much with each video you put out.
    Love the Friday sessions on Discord. Can't wait to see what's next 💪💪👏👏

  • @Dr_Tripper
    @Dr_Tripper A year ago

    I am currently writing the scraping function for an agent that leads to a project very similar to yours. I am learning programming as I go. I used to have your energy; I wish I had it now, in this exciting time to be alive. Thanks.

  • @ChrismAIAutomation
    @ChrismAIAutomation A year ago

    Great video man, cheers from rainy Auckland!

  • @HKallioGoblin
    @HKallioGoblin A year ago +5

    This becomes very interesting when Neuralink provides access to everyone's memories.

    • @benjaminfauchald2990
      @benjaminfauchald2990 A year ago

      Oh, you mean in August?

    • @HKallioGoblin
      @HKallioGoblin A year ago

      @@benjaminfauchald2990 Could be; I'm not aware of any public schedule for that. I know that secret services have been able to do that for over 15 years.

  • @r07r437
    @r07r437 A year ago +1

    Could you make a complete beginner-friendly tutorial on how to build it step by step, please? That video could be pure gold.

  • @orbitmarketing-usa
    @orbitmarketing-usa A year ago

    Awesome video! Thanks Liam!

  • @binolgeorge
    @binolgeorge A year ago +4

    Keep going

  • @TheCopernicus1
    @TheCopernicus1 A year ago +1

    Awesome work mate!

    • @LiamOttley
      @LiamOttley  A year ago

      Thanks for the support mate 🤟🏽

  • @firetownplatformfinders3996
    @firetownplatformfinders3996 A year ago

    Thank you for making these amazing videos. You are a legend in my world.

  • @PetrosseZavala
    @PetrosseZavala A year ago

    thanks dude

  • @user-tk2gq8pg7u
    @user-tk2gq8pg7u A year ago +1

    Will it still be charged token fees from OpenAI if the answer is found within the correct knowledge base? Thanks

  • @avikshitbanerjee1
    @avikshitbanerjee1 A year ago

    This is great and extremely powerful! I was wondering if there would be any way to print out the human-readable context "snippets" along with the LLM response?

  • @romanemilianomandujano6175
    @romanemilianomandujano6175 A year ago

    Dude, you are amazing. Congratulations!

    • @LiamOttley
      @LiamOttley  A year ago

      Thank you!! Glad I could help

  • @edwardskcmo
    @edwardskcmo A year ago +1

    Very interested in engaging you to build something very similar for me. What price range are we looking at? BTW, great stuff! Thanks

    • @LiamOttley
      @LiamOttley  A year ago

      Hi Dennis, fill out the form here and we'd be happy to help: morningside.ai

  • @ruckriver7508
    @ruckriver7508 A year ago

    I am a full-stack developer. What is the best library or project to train AI on private data? I watched the PrivateGPT video, but honestly, PGPT is very slow. 😅

  • @ELDoradoEureka
    @ELDoradoEureka A year ago

    Good job

  • @zahirabbasmohdhussein4091
    @zahirabbasmohdhussein4091 A year ago

    You are an amazing person for sharing your work with the world. Always stay blessed, brother. Is it possible to integrate it with a Telegram bot?

  • @laurentdemeur689
    @laurentdemeur689 A year ago

    Great video! Thank you! How would you fine-tune the function if you need to leverage different data sources to answer one question? For example, my prompt could require combining elements from ChatGPT + Google Search (weather) + a website ("how to contact customer support?") + a vector DB. "I need advice on dressing for a wedding in New York next weekend" > ChatGPT (fashion style in NYC) + Google (weather in NYC next week) + Macy's website ("to contact support...") + vector DB (containing different suit and shirt products from Macy's). Thanks for your guidance!
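
    A minimal sketch, not from the video, of one way to combine several sources for a single question: gather a snippet from each source, then pass the combined context to the LLM in one final prompt. Every handler below is a hypothetical placeholder; swap in real ChatGPT, search, scraping, and Pinecone calls (Python):

        def ask_chatgpt(question: str) -> str:
            # placeholder: call your LLM here (e.g. the OpenAI chat API)
            return "General advice on wedding outfits..."

        def search_weather(location: str) -> str:
            # placeholder: call a search or weather API here
            return "Forecast for New York next weekend: 18°C, light rain."

        def scrape_site(url: str) -> str:
            # placeholder: fetch and parse the support page here
            return "Contact Macy's support via the form at ..."

        def query_vector_db(question: str) -> str:
            # placeholder: embed the question and query Pinecone here
            return "Matching products: navy suit, white dress shirt."

        def answer(question: str) -> str:
            # gather context from every relevant source, then hand the
            # combined context to the LLM in one final prompt
            context = "\n".join([
                ask_chatgpt(question),
                search_weather("New York"),
                scrape_site("https://www.macys.com/customer-service"),  # hypothetical URL
                query_vector_db(question),
            ])
            final_prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
            return ask_chatgpt(final_prompt)

        print(answer("I need advice on dressing for a wedding in New York next weekend."))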

  • @proudindian3697
    @proudindian3697 A year ago

    Similar functionality also exists in LlamaIndex.

  • @23SonicBoom
    @23SonicBoom A year ago

    I gotta learn to get this done for a Discord bot with my own data. XD

  • @neerajmahapatra5239
    @neerajmahapatra5239 A year ago

    This is a very basic video. Add some more complex stuff, like connecting a vector store and a MySQL database.

  • @ThierryVilaysith
    @ThierryVilaysith A year ago

    What about privacy? No connection to OpenAI servers for multiple files/data sources?

  • @khushpatel6296
    @khushpatel6296 A year ago

    Hello Liam, I am asking here as this is your latest video. I have seen your videos on LangChain, and I am curious how to optimize inference when using LLMs in LangChain with TensorRT or ONNX Runtime. In industry it's obviously important to save time as well as computation cost. With TensorRT and open-source models we have techniques like quantization and a few more. So is there any way to do this in LangChain?

  • @coolstoryai
    @coolstoryai A year ago +1

    A question for those more knowledgeable than me in this respect: would it be more effective to train the model on this data rather than feeding it the PDF, or is the effectiveness the same?

    • @Curiumx
      @Curiumx A year ago +2

      Training is more difficult and expensive because it typically requires buying access to $200,000 computers for a few hours at LEAST.
      It's the difference between making someone read a book about birds to learn all about them and internalize that knowledge, versus just giving them a guide about birds to keep in their backpack that they can use whenever necessary. That's much less hassle than spending the time learning and internalizing. This way you can carry around tons of guides more easily, and you don't need to train on all of them; you just keep them around as reference. Easy peasy. Lol, sorry for the long, winding explanation 😂

    • @coolstoryai
      @coolstoryai A year ago

      @Seth Taddiken I appreciate it. I think I heard somewhere that there is a cheaper way to train models on a small amount of data now than there used to be. That's why I was wondering if it would make the quality of the results better and more complete, with fewer hallucinations, or if it wouldn't make a noticeable difference.

    • @Curiumx
      @Curiumx A year ago +2

      @@coolstoryai You can use Low-Rank Adaptation (LoRA) to train just a subset of the network weights and leave the original network intact. A LoRA is like a business suit for the model: it can put on a cowboy suit and pretend to be a cowboy, but I don't think this method is great for injecting knowledge; my guess is it mostly biases behavior. Parsing and organizing documents/books/etc. into vectorized databases and doing semantic searches lets the AI reference the actual source text that you want it to talk about, which makes sense to me for a lot of applications (see the sketch after this thread).

    • @coolstoryai
      @coolstoryai A year ago +2

      @Seth Taddiken I see. Thank you for explaining that and breaking it down

    • @Curiumx
      @Curiumx A year ago +1

      @@coolstoryai Of course. I've heard questions like this a lot, so I figured I'd practice answering it 😝
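
      A minimal sketch, not from the video, of the retrieval approach described above: chunk documents, embed them, and pick the most relevant chunk by cosine similarity before prompting the LLM. The embed() function is a toy stand-in for a real embedding model such as OpenAI embeddings or sentence-transformers (Python):

          import math

          def embed(text: str) -> list[float]:
              # toy placeholder embedding (letter counts), just to keep the
              # sketch runnable; replace with a real embedding model
              vec = [0.0] * 26
              for ch in text.lower():
                  if ch.isascii() and ch.isalpha():
                      vec[ord(ch) - ord("a")] += 1.0
              return vec

          def cosine(a: list[float], b: list[float]) -> float:
              dot = sum(x * y for x, y in zip(a, b))
              norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
              return dot / norm if norm else 0.0

          chunks = [
              "Sparrows are small seed-eating birds often found in cities.",
              "Hawks are birds of prey that hunt small animals.",
          ]
          index = [(chunk, embed(chunk)) for chunk in chunks]

          question = "Which birds hunt other animals?"
          best_chunk = max(index, key=lambda item: cosine(item[1], embed(question)))[0]
          prompt = f"Answer using this context:\n{best_chunk}\n\nQuestion: {question}"
          print(prompt)  # this prompt would then be sent to the LLM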

  • @dawn_of_Artificial_Intellect
    @dawn_of_Artificial_Intellect A year ago

    How do you set the Pinecone endpoint?
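
    A minimal sketch, assuming the older pinecone-client interface where the endpoint is selected via the environment string from the Pinecone console (newer versions of the SDK use a Pinecone class instead); the index name and environment below are placeholders:

        import pinecone

        # the environment/region string from the Pinecone console determines the endpoint
        pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")

        index = pinecone.Index("my-index")  # hypothetical index name
        print(index.describe_index_stats())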

  • @vipanchika5059
    @vipanchika5059 A year ago

    I have been seeing a lot of opportunities here for multiple businesses and benefits. Is that right?

  • @dyce8539
    @dyce8539 A year ago

    Hey Liam, I've got a question: which would be better, a hybrid bot or the chatbot with hundreds of files like in the video you mentioned?

    • @LiamOttley
      @LiamOttley  A year ago

      Depends on whether you have distinct data types/sources!

  • @stavroskyriakidis4839
    @stavroskyriakidis4839 A year ago

    Nice

  • @xxasadekx
    @xxasadekx A year ago

    Hi Liam, I will get in touch with you through LinkedIn.

  • @MarkoJakovljevic-b9o
    @MarkoJakovljevic-b9o A year ago

    I can't find the link for this, though?

  • @iltodes7319
    @iltodes7319 A year ago

    Please break it down: how do you deal with the price for every request, especially for big data and when many users are going to use the same chat from one website?

  • @peroforrr7663
    @peroforrr7663 A year ago

    Where is the code?

  • @michaelortiz1444
    @michaelortiz1444 A year ago

    @LiamOttley Hey, could you do this one with specific LLMs instead? Like if you ask a coding question you get Replit's Hugging Face LLM, or if you want a longer answer you call on an LLM built for that, and if you need to ingest data you call on one that can handle it? BTW, you can schedule a call with me for ideas ;);)

  • @capecha
    @capecha A year ago

    Any advice on how to integrate PrivateGPT?

    • @LiamOttley
      @LiamOttley  A year ago +2

      The same process can be used to route to local models. Just change the handler functions to use your local models (see the sketch after this thread).

    • @capecha
      @capecha A year ago +1

      @@LiamOttley thanks a lot man! You are great!
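
      A minimal sketch, not from the video, of what "change the handler functions" could look like: the routing stays the same and only the handler body changes. call_local_model() is a hypothetical placeholder for whatever local stack is used (PrivateGPT, llama.cpp, Ollama, etc.):

          def call_local_model(prompt: str) -> str:
              # placeholder: replace with a call to your locally running model,
              # e.g. an HTTP request to a local inference server
              return f"(local model answer to: {prompt})"

          def handle_company_docs(question: str, context: str) -> str:
              # this handler previously called the OpenAI API; now it stays local
              prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
              return call_local_model(prompt)

          print(handle_company_docs("What is our refund policy?", "Refunds are accepted within 30 days."))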

  • @vipanchika5059
    @vipanchika5059 A year ago

    Time is going on, but nothing beneficial for my life is coming to me; only some expectations remain in this business.

  • @xxasadekx
    @xxasadekx A year ago

    How about the size of the data being loaded? Can it go up to 5 GB each or more? Let me know if you are aware of that, thanks.

    • @LiamOttley
      @LiamOttley  A year ago +1

      Pinecone is probably your best bet for that much data; check out my HormoziGPT video.