
- Videos: 5
- Views: 22,585
TechWithRay
Joined 19 Oct 2022
An AI engineer working at a tech giant, I am excited to share my knowledge and insights with you. Join me as we delve into the fascinating world of artificial intelligence, machine learning, and everything related to cutting-edge technology.
On this channel, you can expect in-depth tutorials, demonstrations, and discussions about various AI topics. From overviewing popular AI packages and frameworks to diving into the latest research papers, I aim to provide valuable content that helps you stay updated and empowered in the fast-paced AI industry.
Make sure to hit the subscribe button so you won't miss any of my upcoming videos.
Function calling: A game changer to build OpenAI Super App ( Langchain 💀 )
Build an intelligent OpenAI super app with function calling!!
LangChain will be 💀. You can literally use function calling to replace LangChain.
You can integrate APIs, e.g. Yelp, Amazon, Google, weather, finance, and more, to build a super-intelligent app.
Code on Github:
github.com/TechWithRay/OpenAI-Function-Calling
Repo: TechWithRay/OpenAI-Function-Calling
Please give a like or subscribe if you feel this video is helpful. Thank You!!
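As a rough illustration of the pattern this video describes, here is a minimal, self-contained sketch of the function-calling loop in plain Python. The `get_weather` function, its schema, and the stubbed model message are hypothetical stand-ins: a real app would send `FUNCTIONS` to the OpenAI chat completions endpoint and get the function-call message back from the model.

```python
import json

# Hypothetical local function the model is allowed to call.
def get_weather(city: str, unit: str = "celsius") -> dict:
    # In a real app this would hit a weather API; here it is stubbed.
    return {"city": city, "temperature": 21, "unit": unit}

# JSON-schema description sent to the model alongside the chat messages.
FUNCTIONS = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}]

REGISTRY = {"get_weather": get_weather}

def dispatch(function_call: dict) -> dict:
    """Route the model's function-call message to real Python code."""
    fn = REGISTRY[function_call["name"]]
    # The model returns the arguments as a JSON string.
    args = json.loads(function_call["arguments"])
    return fn(**args)

# Shape of the message a chat completion might return when it decides to call a tool:
model_message = {"name": "get_weather", "arguments": '{"city": "Boston"}'}
print(dispatch(model_message))
```

The result of `dispatch` would then be appended to the conversation as a function-role message, and the model called again to produce the final answer.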
Views: 2,261
Videos
Orca: Is this LLM better than ChatGPT ?
Views: 613 · A year ago
The latest LLM: Orca. If you feel this is helpful, please give a like or subscribe. Thank you for watching. Check the notes on Github: github.com/TechWithRay/LLMs-Papers Please feel free to give me any recommendations! Thanks.
Flowise/Langflow: How to build a no-code PDF Chatbot with langchain.
Views: 1.9K · A year ago
ruclips.net/video/ROoGbFrvuwM/видео.html This video shows the Flowise example used in the video above. Please feel free to check it out. Thank you for watching. Hope you have a nice day! If you like this video, please give a like 👍 or subscribe. A simple example explaining the code behind Flowise and Langflow. Code in this video Github: TechWithRay Repo: demystify_flowise_langflow github.com...
Flowise: How to chat with your PDF using no-code UI Framework
Views: 13K · A year ago
[NEW] ruclips.net/video/MERQHJBCq4c/видео.html demonstrates the code and logic behind this Flowise chatbot. Please feel free to check it out. Thank you for watching. Hope you have a nice day. A tutorial for the Flowise UI. Flowise is a TS/JS no-code UI tool for LangChain; if you are familiar with JS, please feel free to contribute to the project. github.com/FlowiseAI/Flowise About me: Welcome to my...
Flowise: A Comprehensive Overview and Installation Guidance using Docker
Views: 4.5K · A year ago
UPDATE: ruclips.net/video/ROoGbFrvuwM/видео.html How to build a Flowise chatbot. ruclips.net/video/MERQHJBCq4c/видео.html The code and logic behind the Flowise chatbot. Hi there, Flowise made changes to the Dockerfile under the root folder. They used COPY package.json yarn.loc[k] ./ The glob pattern of yarn.loc[k] avoids the previous issue of yarn.lock not being found. You can build the docker image without u...
Hi, I don't know if things have changed, but the interface is different now. Now you need to upsert the PDF file before being able to chat. So I tried using the default model from Hugging Face but got a problem with dimensions: the model was 768 but Pinecone needed 1024. I found a model for 1024 (intfloat/multilingual-e5-large) but got a "fetch failed" error. I looked for the same PDF you used in your video and still got the same error. But then I tried a smaller one and it worked. Do you know if it could be because I'm using the free version of Pinecone, which doesn't allow big files? Or because I'm hosting Flowise on Hugging Face? Thanks for the video, by the way.
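For context on the dimension error described in this comment: a vector index is created with a fixed dimension, and every embedding you upsert must have exactly that length. A minimal stdlib sketch of that constraint (the model-to-dimension table below is an assumption based on the models named in the comment):

```python
# Embedding output sizes are fixed per model; a few examples for illustration.
MODEL_DIMS = {
    "sentence-transformers/all-mpnet-base-v2": 768,   # a common Hugging Face default
    "intfloat/multilingual-e5-large": 1024,
}

def check_compatible(model: str, index_dim: int) -> bool:
    """A vector index only accepts embeddings whose length equals its dimension."""
    return MODEL_DIMS[model] == index_dim

print(check_compatible("sentence-transformers/all-mpnet-base-v2", 1024))  # mismatch, upsert fails
print(check_compatible("intfloat/multilingual-e5-large", 1024))           # matches
```

So switching models as the commenter did is the right fix for the dimension error; the later "fetch failed" error is a separate issue (likely hosting or quota related) that this check cannot explain.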
Do you think I can set up Flowise to read a local folder of CSVs, looking for a name and then using those CSVs (or PDFs, I guess) to talk about the data, instead of the built-in download for the CSV agent?
Great video, thanks
How do you add a prompt to the LLM chain so it can answer from documents?
After replicating your structure I am getting 0 in the response to all my prompts. Where am I going wrong? Please explain.
How can i use Chat Prompt Template on ChatOpenAI with memory?
Not sure how this is a comprehensive overview when it only covers installation? I think you need to at least open the app to call it an overview...
Stick to Chinese
Best yet! Thank you! If the pdf has a structured table, will the split and encode destroy easy extraction of meaningful relationships in a query?
Thanks for sharing!
Thanks for watching!!
Thanks, TwR!!
Thanks for watching!!
What are these highlights you are using?
Hi, I was using the highlights from the Adobe PDF reviewer.
Nice tutorial on OpenAI function calling. Thanks for the informative video.
Thank you for watching!! Hope you have a nice day!
Just a few tips: start with an outline, step one, and then follow it so others can follow along and don't get confused. You don't want to be jumping around when you are teaching something.
Thank you for the helpful tips!! I will do a better job. 😊
Awesome!!! Insta subscribed! Thank you very much for sharing, I will contribute with you.
Thank you for watching!!!
Great content. question! I tried it and it seems answers are restricted to the content of the doc and can’t go out of context. Is there any way around that so it can respond like the normal ChatGPT and also refer to the doc?
Yes, you are right!! It cannot go beyond the content you have, because the purpose of this is to find content in your PDF, not from ChatGPT. If you really want to go beyond your PDF, one solution: you can wrap your search function into Function Calling on the ChatGPT endpoint; you can get some ideas from ruclips.net/video/gtbj4AgOkTo/видео.html. Hope this is helpful for you.
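The idea in this reply, wrapping your own search function so the model can fall back to general knowledge when the PDF has no answer, can be sketched as follows. `search_pdf` and its tiny chunk table are hypothetical stand-ins for a real vector-store lookup:

```python
def search_pdf(query: str) -> str:
    """Hypothetical stand-in for a vector-store search over your PDF chunks."""
    chunks = {"turing": "Alan Turing proposed the imitation game in 1950."}
    return chunks.get(query.lower(), "")

def answer(query: str) -> str:
    # First try the document; if nothing matches, fall back to the model's
    # own knowledge (here just a placeholder string).
    hit = search_pdf(query)
    if hit:
        return hit
    return f"(model answers '{query}' from general knowledge)"

print(answer("Turing"))
print(answer("quantum computing"))
```

In a real setup, `search_pdf` would be registered as a callable function with the chat endpoint, and the model itself would decide when to invoke it versus answering directly.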
Perfect for searching Turing... can I search other words besides Turing?
Yes, you can search other words; it will give you other related content. You can check this video, it shows more examples: ruclips.net/video/MERQHJBCq4c/видео.html.
Thanks. Hopefully they can release the model soon.
Thanks for watching!!
Thanks!!
Thank you for watching!
Thank you for the great content! Can you please tell me how can I use the free LLMs without OpenAI? Thank you.
Thank you so much for watching. I will record another video how to use LLMs from Huggingface to replace OpenAI. Stay tuned. Thanks.
@@techwithray8943 I'd prefer to be able to use oobabooga/textgen as my local API, but I'd settle for anything that lets me run a local Flowise pipeline utilizing a WizardLM GPTQ model (preferably wizardLM-uncensored-falcon-7b-gptq, but again, I'll take anything that works)... PLEASE GOD HELP! xD
Thank you!! This is very informative. Please make more content like this.
Thank you so much for your support!!! Will keep working on good content!
It's great. If I want Flowise to partly use the information in the LangChain database and partly use real-time information from SERP, how can I achieve this?
Thank you so much for watching! Do you mean you want to search info from your database and SERP, then send all the relevant content to LLMs (OpenAI)?
@@techwithray8943 That's almost it. There is a lot of knowledge in OpenAI that is outdated or incorrect. It needs to be re-aligned with the local database. If you only use SERP to search, there will also be error messages. For specific problems, using a dedicated database first and then the LLM would be a better choice.
@@liuchuchenliu2802 ***I SPENT $9 DOLLARS IN OPENAI API CALLS BUILDING A CUSTOM DB YESTERDAY - DOING IT THIS WAY THROUGH FLOWISE IS NOT OPTIMUM*** Add "conversation summary memory" to have persistent memory between queries, an in-memory vector store, a vector store retriever, Hugging Face embeddings, a local file system FAISS vector retriever, and a FAISS vector store to write your DB to local disk... I think that's it. Use text file and text splitter modules if you want to give it guidance on the stored vector embeddings, and have a really long convo or prompt chain that stays on topic of what you would like it to "learn" (add a vector store retrieval prompt template, and tell it it's obsessed with a certain topic to laser-focus on specific data)... When you want to implement it, use a FAISS retriever, a context memory store of some kind, and a conversational QA chain / vector DB QA chain...
Timestamps:
1. Build a basic chatbot: ruclips.net/video/ROoGbFrvuwM/видео.html
2. Build a chatbot to chat with a PDF: ruclips.net/video/ROoGbFrvuwM/видео.html
Very helpful. Thank you, please keep posting!!!
How do I combine a prompt and a PDF?
Hi, thanks for watching. You can use an LLM Chain to directly use a prompt, or use a Conversational Retrieval QA chain to add "inputs" along with embeddings and send them to the OpenAI API.
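Behind a Conversational Retrieval QA chain sits roughly this pipeline: split the PDF text into chunks, score each chunk against the question, and paste the best chunk into the prompt sent to the model. A toy stdlib-only sketch of that flow, with plain word overlap standing in for real embedding similarity:

```python
def split(text: str, size: int = 8) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(chunk: str, query: str) -> int:
    """Toy relevance score: shared lowercase words between chunk and query."""
    return len(set(chunk.lower().split()) & set(query.lower().split()))

def retrieve(chunks: list[str], query: str) -> str:
    """Return the highest-scoring chunk for the query."""
    return max(chunks, key=lambda c: score(c, query))

doc = ("Flowise is a no-code UI for LangChain. It lets you chat with a PDF "
       "by embedding its chunks.")
question = "How do I chat with a PDF?"
context = retrieve(split(doc), question)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(context)
```

A real chain replaces `score` with cosine similarity over embeddings and sends `prompt` (plus chat history) to the LLM, but the data flow is the same.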
Cool, can't wait for 13:00 :)
Great! Keep sharing your knowledge and skills with us.
Thank you so much.
Have you successfully debugged with the Azure OpenAI API? I tried several times; it was fine with OpenAI but not successful with Azure.
Hi, I did not use the Azure OpenAI API. It should be simple to use. Let me try it first and get back to you. Thanks.
@@techwithray8943 Thank you, but I just succeeded after changing a set of APIs. Obviously, there is a certain difference between using Azure and native OpenAI to generate content 😄
@@liuchuchenliu2802 I am so glad you figured it out. I checked the Azure OpenAI services; the authentication is a bit different, and the GPT version might be different from what we choose from OpenAI.
Hi there, Flowise made changes to the Dockerfile under the root folder. They used COPY package.json yarn.loc[k] ./ The glob pattern of yarn.loc[k] avoids the previous issue of yarn.lock not being found. You can build the docker image without updating the Dockerfile.
Two questions: 1 - Why do you prefer Hugging Face Inference for embeddings? 2 - Do we not need to specify the model for embedding?
Hi, thank you for your questions. Q1: Hugging Face embedding is free, while OpenAI embedding charges a fee based on the number of tokens. If we embed PDF files with a few hundred pages, the fee might be higher than expected. Q2: We do not need to specify the model for embedding, as there is a DEFAULT_MODEL_NAME. You can check the Flowise source code here: github.com/FlowiseAI/Flowise/blob/ff93d11913e00ef2daa390757a2a0ed14485c234/packages/components/nodes/embeddings/HuggingFaceInferenceEmbedding/HuggingFaceInferenceEmbedding.ts#L3. It uses HuggingFaceInferenceEmbeddings, which has a DEFAULT_MODEL_NAME; please refer here: python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html (the Python code and LangchainJS follow the same logic).
@@techwithray8943 Thank you
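The default-model behaviour described in the reply above follows a common library pattern: fall back to a module-level default when the caller does not pass a model name. A minimal sketch of that pattern (the default name below is an assumption for illustration, not Flowise's actual constant):

```python
# Hypothetical default, for illustration only.
DEFAULT_MODEL_NAME = "sentence-transformers/all-mpnet-base-v2"

def resolve_model(model_name=None):
    """Fall back to the library default when the caller does not pick a model."""
    return model_name or DEFAULT_MODEL_NAME

print(resolve_model())                                  # the library default
print(resolve_model("intfloat/multilingual-e5-large"))  # explicit choice wins
```

This is why the Flowise node works with the model field left empty: the underlying embeddings class supplies its own default name.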
Thank you, we need more videos like this to understand the possibilities
Thank you so much for watching. I am new to YouTube and still improving my video quality.
Hello, 🎉 great content! I would like to request a video on a solution for chatting with PDFs using an open-source model from Hugging Face that utilizes a GPU and ensures data privacy.
Thanks for watching. I will work on this request and record another video.
good AI
Thank you!!
Hi there, I am new to YouTube and still improving my video editing skills.