- Videos: 21
- Views: 98,048
Leon Explains AI
Joined: 27 May 2021
Welcome to my YouTube channel, where innovation meets application. As an experienced AI Automation Developer, I delve deep into the world of Artificial Intelligence and Large Language Models, uncovering how they can transform both your life and business. From hands-on tutorials to insightful discussions, I aim to demystify AI and make it accessible for everyone. Whether you're an entrepreneur looking to stay ahead of the curve, or simply curious about the tech shaping our future, this channel equips you with the knowledge you need. Subscribe now to become a part of a community that believes in the power of AI to bring about real change. Your journey towards AI mastery starts here.
You can DM me on Twitter or LinkedIn if you want to get in touch. If you're interested in working with me, feel free to contact me via email.
Get Your FREE Ollama-Based Multimodal Chat App with PDF, Image & Voice Chat
In this video, I show you how to set up the Ollama-based Multimodal Local AI Chat App on Linux and Windows using Docker Compose. This new version introduces integration with Ollama and the OpenAI API, enhancing the performance and capabilities of the Local Multimodal AI Chat system.
Alongside these new additions, the repository continues to support essential features like:
- Local AI Models: Run 100% private, local models.
- Image Chat: Interact with images using advanced AI.
- RAG PDF Chat: Chat with PDF documents using Chroma DB.
- Voice Chat: Speech-to-text functionality for smooth voice interactions.
I walk you through the setup process, how to handle errors, and ensuring everything ru...
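For reference, a minimal sketch of what the Ollama integration looks like through LangChain. It assumes a locally running Ollama server with a pulled model; the model name and settings are illustrative assumptions, not necessarily the app's defaults.

```python
# Minimal sketch: talking to a locally running Ollama model via LangChain.
# Assumes `pip install langchain-community` and `ollama pull mistral` were run;
# the model name "mistral" is an example, not necessarily the repo default.
from langchain_community.llms import Ollama

llm = Ollama(base_url="http://localhost:11434", model="mistral", temperature=0)

# Single-turn call; the chat app wraps this kind of call in chains plus a Streamlit UI.
answer = llm.invoke("In one sentence, what can a multimodal chat app do?")
print(answer)
```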
Views: 1,562
Videos
If You Want to Deploy LLM Apps, You NEED to Know This
Views: 1K · 10 months ago
You will learn about various LLM deployment scenarios tailored for personal use, small businesses, and enterprise settings, highlighting the distinctions and considerations for each. This video delves into the critical choice between local models and paid services, illuminating the benefits and drawbacks of both approaches. Special emphasis is placed on the importance of utilizing GPUs over CPU...
Create your own Local Chatgpt for FREE, Full Guide: PDF, Image, & Audiochat (Langchain, Streamlit)
Views: 59K · 11 months ago
In this LangChain and Streamlit tutorial, I present a full guide on building your own Local Multimodal AI Chat application using local models. This comprehensive walkthrough demonstrates how you can create an advanced chat interface without relying on external platforms like OpenAI or ChatGPT. We focus on the integration of powerful tools such as Streamlit and Hugging Face, alongside LangChain,...
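As a rough orientation, the text-chat core of such an app boils down to loading a quantized local model through LangChain's CTransformers wrapper and piping a prompt into it. The model path and settings below follow the config.yaml discussed in the comments and should be read as assumptions, not the exact repository code.

```python
# Sketch of the text-chat core: a quantized local GGUF model loaded with
# LangChain's CTransformers wrapper. Model path and config values are assumptions
# based on the config.yaml mentioned in the comments, not the repo's exact code.
# Requires: pip install langchain langchain-community ctransformers
from langchain_community.llms import CTransformers
from langchain_core.prompts import PromptTemplate

llm = CTransformers(
    model="models/mistral-7b-instruct-v0.2.Q2_K.gguf",  # quantized model file on disk
    model_type="mistral",
    config={"max_new_tokens": 512, "temperature": 0.0, "context_length": 4096},
)

prompt = PromptTemplate.from_template(
    "You are a helpful assistant.\nUser: {question}\nAssistant:"
)
chain = prompt | llm  # prompt piped into the local model
print(chain.invoke({"question": "What can this local chat app do?"}))
```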
How to Build Your Custom GPT in 5 Minutes
Views: 417 · 1 year ago
You will learn how to build custom actions to make API requests to create your custom GPT. We will build an arxiv research assistant together. With this assistant we can get information on the latest research papers. 📚 Resources: - OpenAPI Specification: swagger.io/docs/specification/about/ 🔗 Jump to Sections 00:00 - 00:44 Custom GPTs Introduction 00:45 - 05:43 Custom GPTs Building 05:44 - 05:4...
Three Ways to Load FREE Huggingface LLMs with Langchain
Views: 1.6K · 1 year ago
You will learn three ways to load Hugging Face LLMs locally for free with LangChain. We will talk about GPU memory usage and how you can tweak a prompt to reduce costs if you're working with the OpenAI API. You will also learn about quantization, a technique which enables us to load large models that usually would not fit in your memory. 📚 Resources: - Get the Code: github.com/Leon-Sander/t...
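One of those approaches, sketched below under assumptions: loading a model locally with 4-bit quantization and wrapping it as a LangChain LLM. The model id and generation settings are examples, not necessarily the video's exact choices.

```python
# Sketch of one approach: load a Hugging Face model locally with 4-bit quantization
# and wrap it as a LangChain LLM. Model id and settings are illustrative assumptions.
# Requires: pip install transformers accelerate bitsandbytes langchain-community
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
from langchain_community.llms import HuggingFacePipeline

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4 bit to fit in less VRAM
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
llm = HuggingFacePipeline(pipeline=pipe)

print(llm.invoke("Explain quantization in one sentence."))
```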
Langchain Speech to Text Summary Bot FULL GUIDE
Views: 958 · 1 year ago
Welcome to this Langchain tutorial where we dive deep into the world of speech-to-text technology! In this comprehensive guide, you'll learn how to transcribe voice messages and summarize them using a Langchain LLM chain. Best of all, we'll integrate everything into a Telegram bot for seamless automation. What You'll Learn: - How to use Langchain and OpenAI whisper API for speech-to-text - Tran...
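The two core steps look roughly like this; the file name, model choices and prompt are assumptions, and the Telegram wiring from the video is left out.

```python
# Sketch of the two core steps: transcribe audio with the OpenAI Whisper API,
# then summarize the transcript with a small LangChain chain. File name, models
# and prompt are assumptions; the video additionally wires this into a Telegram bot.
# Requires: pip install openai langchain-openai  (OPENAI_API_KEY set in the environment)
from openai import OpenAI
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

client = OpenAI()

# 1) Speech-to-text
with open("voice_message.ogg", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# 2) Summarization chain (prompt piped into the LLM)
prompt = ChatPromptTemplate.from_template("Summarize this voice message:\n\n{text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0)
print(chain.invoke({"text": transcript.text}).content)
```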
Building an AI Automation Chatbot for E-commerce
Views: 632 · 1 year ago
Learn step-by-step how to build a product recommendation and customer support chatbot by leveraging AI in e-commerce. We'll use Langchain VectorIndex as a custom knowledge base, connected via API to the Voiceflow chatbot. Additionally, you'll learn how to integrate with Zapier for a seamless customer experience. 🛠 Tools Covered: - Voiceflow for Chatbot Building - Shopify for Ecommerce Integrati...
Deploy Your Langchain Vectorindex for FREE as an API
Views: 1.2K · 1 year ago
In this tutorial, you'll learn how to create a FastAPI endpoint and deploy it on Render for free. My previous Langchain tutorial covered how to create a Vector Database / VectorIndex using Shopify data. In this video, we'll take the next steps: creating a FastAPI endpoint and deploying it on Render as a web service. This will set the stage for integrating it as a custom knowledge base in Voicef...
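The shape of such an endpoint, as a self-contained sketch: the embedding model and the tiny in-memory index are stand-ins for the Shopify-based index built in the previous video.

```python
# Minimal sketch: expose a LangChain vector index through a FastAPI endpoint so a
# chatbot platform (e.g. Voiceflow) can query it over HTTP. The embedding model and
# the tiny in-memory index are illustrative stand-ins, not the video's exact code.
# Requires: pip install fastapi uvicorn langchain-community sentence-transformers faiss-cpu
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorindex = FAISS.from_texts(
    ["Blue running shoes, sizes 36-45", "Waterproof hiking jacket, unisex"],
    embeddings,
)

app = FastAPI()

class Query(BaseModel):
    question: str
    k: int = 2  # number of similar documents to return

@app.post("/similarity_search")
def similarity_search(query: Query):
    docs = vectorindex.similarity_search(query.question, k=query.k)
    return {"results": [doc.page_content for doc in docs]}

# Local test: `uvicorn main:app --reload`, then POST {"question": "running shoes"}.
```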
Build a Langchain Vector Database with E-Commerce Data
Views: 2.2K · 1 year ago
In this Langchain tutorial, you will learn about data exploration, preprocessing, and creating a Vector Database / VectorIndex. This Vector Database / VectorIndex will serve as our custom knowledge base for a customer support chatbot. We will use Shopify product data to build the Vector Database. This knowledge will assist you in building automations using Langchain as a custom code base, enabl...
Langchain Youtube Summarizer Created and Deployed on AWS in ONE VIDEO
Views: 1.1K · 1 year ago
You will learn how to work with langchain agents and langchain tools to summarize YouTube videos and online articles, wrap it inside a Streamlit website and deploy everything on AWS EC2. Resource Links: My Code: github.com/Leon-Sander/yt_and_article_summarizer Browserless: www.browserless.io/ AWS: aws.amazon.com/ 0:00 Intro 1:02 Coding Start 2:00 Summary Functions 3:28 YouTube Summary Tool 4:36...
Exploring Streamlit by Making a Langchain Chatapp From Start to Finish
Views: 886 · 1 year ago
You will learn how to use Streamlit to build chat apps and display tables or charts with just a few lines of code in minutes. Get the code: github.com/Leon-Sander/Streamlit_chatapp Streamlit Website: streamlit.io/ LET'S CONNECT: 📧 Business/Consultation Contact: leonsander.consulting@gmail.com ☕ Buy me a Coffee: www.buymeacoffee.com/leonsanderai SEO: Streamlit tutorial, langchain tutorial, stream...
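The bare skeleton of such a Streamlit chat app looks roughly like this; the echo response is a placeholder where a LangChain chain would normally be called.

```python
# Minimal Streamlit chat skeleton of the kind built in the video. The "echo"
# response is a placeholder where an LLM chain would go.
# Requires: pip install streamlit   Run with: streamlit run app.py
import streamlit as st

st.title("LangChain Chat App")

# Streamlit reruns the whole script on every interaction,
# so the chat history has to live in session_state.
if "messages" not in st.session_state:
    st.session_state.messages = []

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.write(message["content"])

if user_input := st.chat_input("Type your message"):
    st.session_state.messages.append({"role": "user", "content": user_input})
    with st.chat_message("user"):
        st.write(user_input)

    response = f"Echo: {user_input}"  # swap in an LLM chain here
    st.session_state.messages.append({"role": "assistant", "content": response})
    with st.chat_message("assistant"):
        st.write(response)
```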
You Need To Understand Langchain Chain_Type Parameter For Large Document Summarization
Views: 1.8K · 1 year ago
You will learn which chain_type to use to summarize large documents. I will talk about how each chain type works and its advantages and disadvantages. Get the code: github.com/Leon-Sander/Summarize-Large-Docs LET'S CONNECT: 📧 Business/Consultation Contact: leonsander.consulting@gmail.com ☕ Buy me a Coffee: www.buymeacoffee.com/leonsanderai #langchain #openai #aaa #artificialintelligence #a...
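In code, the choice comes down to a single parameter on LangChain's summarize chain; the LLM and documents below are placeholder assumptions.

```python
# Sketch of where chain_type comes in: LangChain's load_summarize_chain.
# The LLM and documents are placeholder assumptions.
# Requires: pip install langchain langchain-openai
from langchain.chains.summarize import load_summarize_chain
from langchain_core.documents import Document
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
docs = [
    Document(page_content="First chunk of a long report..."),
    Document(page_content="Second chunk of the report..."),
]

# "stuff":      put everything into one prompt (fails once the context window is exceeded)
# "map_reduce": summarize chunks independently, then summarize the summaries
# "refine":     walk through the chunks, refining a running summary
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.invoke({"input_documents": docs})["output_text"])
```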
Index 50 PDF Books In 5 Minutes With Langchain Vectorindex
Views: 24K · 1 year ago
You will learn about an easy way to work with vector indices/vectorstores to query all of your PDF files. You will understand the difference from a vector database and learn to implement a Telegram chatbot as well. Links Mentioned in the Video: My Code: github.com/Leon-Sander/langchain_faiss_vectorindex Embedding leaderboard: huggingface.co/spaces/mteb/leaderboard Python Telegram Bot: github.com...
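The indexing idea compresses to a few lines; the folder name, chunk sizes and embedding model are assumptions, and the Telegram bot layer is omitted.

```python
# Rough sketch of the PDF-indexing idea: load PDFs, split them into chunks, embed
# the chunks into a FAISS vector index and save it to disk. Folder name, chunk sizes
# and embedding model are assumptions; the Telegram chatbot layer is omitted.
# Requires: pip install langchain-community langchain-text-splitters sentence-transformers faiss-cpu pypdf
from pathlib import Path
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

docs = []
for pdf_path in Path("pdfs").glob("*.pdf"):
    docs.extend(PyPDFLoader(str(pdf_path)).load())

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-large-en-v1.5")
index = FAISS.from_documents(chunks, embeddings)
index.save_local("faiss_index")  # reload later instead of re-embedding everything

print(index.similarity_search("What does chapter one cover?", k=3))
```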
Supercharging Langchain: Use Web Search Results for Your Chatbot
Views: 430 · 1 year ago
We will dive into langchain Agents and Tools to enhance our chatbot with a web search capability. Get my code: github.com/Leon-Sander/langchain_bing_enhanced_chat 0:00 Intro 0:23 Requirements 1:14 Create web request function 2:14 Langchain tools 2:59 Define LLM 3:11 Langchain Agents 4:14 Define Chat loop 4:37 Result 5:57 Bonus: Chat Interface 6:22 Why you should work on projects yourself LET'S ...
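The agent-plus-tool pattern, sketched with the classic initialize_agent API (deprecated in newer LangChain releases); the search function body is a stub and the model choice is an assumption.

```python
# Sketch of the agent + tool pattern: wrap a web-search function as a LangChain Tool
# and hand it to an agent. Uses the classic initialize_agent API (deprecated in newer
# LangChain releases); the search function is a stub and the model is an assumption.
# Requires: pip install langchain langchain-openai
from langchain.agents import AgentType, Tool, initialize_agent
from langchain_openai import ChatOpenAI

def web_search(query: str) -> str:
    """Placeholder for the real web request made in the video."""
    return f"Top search results for: {query}"

tools = [
    Tool(
        name="web_search",
        func=web_search,
        description="Useful for answering questions about current events.",
    )
]

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
print(agent.run("What is new in AI this week?"))
```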
AI Automation Agency | Enhance Botpress Chatbot With Bing Search (Without Coding)
Views: 219 · 1 year ago
Easily enhance your Chatbots with Bing or Google search results. Also learn about chaining AI tasks together without coding yourself. LET'S CONNECT: 📧 Business/Consultation Contact: leonsander.consulting@gmail.com ☕ Buy me a Coffee: www.buymeacoffee.com/leonsanderai
Document Retrieval with Local LLMs for FREE (Search Whole Books)
Views: 638 · 1 year ago
Embeddings Basics Finally Explained (Coding Included)
Views: 147 · 1 year ago
Could You Hack OpenAI Through Codeinterpreter?
Views: 29 · 1 year ago
Superalignment Team: OpenAI Prepares For Age Of Ultron
Views: 42 · 1 year ago
Bag of Words explained, the Worst Method for Sentiment Analysis?
Views: 130 · 1 year ago
Most People Hype ChatGPT but don't even know what NLP is
Views: 109 · 1 year ago
Thank you so much sir, this is gold
Sir, can you mention if I have to install something other than the stuff in the models folder? I don't know why I am getting errors regarding langchain and related packages.
To work along with the video you need to install the matching versions of the requirements: put them in a requirements.txt file and run "pip install -r requirements.txt". Here they are:
chromadb==0.4.18
ctransformers==0.2.27
InstructorEmbedding==1.0.1
langchain==0.0.341
librosa==0.10.1
llama_cpp_python==0.2.20
pypdfium2==4.24.0
sentence-transformers==2.2.2
PyYAML==6.0.1
torch==2.1.1
streamlit==1.28.2
streamlit-mic-recorder==0.0.4
transformers==4.35.2
@leonsaiagency I have done that, sir. It's still not working.
@@StudyLady27 that's unfortunate
Sir, can you check out my code once? I don't know what's wrong, but I keep getting issues with llama-cpp-python even though I have tried several things to fix it.
Is everything used in the video free??
Sure, just like the title and thumbnail say
Can we deploy this on Streamlit with ChromaDB and the other components?
I didn't try that, but I guess it is possible
Could you make a step-by-step tutorial on how you made this? I have seen your previous videos, but I want to know how you made this using Ollama.
There are people on GitHub making Ollama apps, but I have found no tutorials on the subject. :(
Great video, I subscribed. Can you give us the modifiable interface in a zip so we can use it in portable USB mode?
Sure, you can just get it from github.
@leonsaiagency thanks can you give me the specific link
@@ja23videos You can find it in the description
thanks Leon! really great effort and work to share it! Cheers to you
Is there any possible way to reduce the response time? In my case it is taking around 15-20 seconds to respond. My laptop specs are: 16GB DDR5 RAM, RTX 3050 6GB VRAM GPU, Intel i5 13th gen processor
Using a better inference engine like Ollama, vLLM or llama.cpp could improve speed. In the newest version of the code I implemented Ollama.
@leonsaiagency thank you for your response, I will try doing that
This is one of the worst tutorials ever. It looks like you just took someone else's code and copied it.
Can we consider this a RAG system?
Really nice interface! I hope one day you will do a more beginner-friendly tutorial... I installed Ollama and Docker but couldn't get past the "docker compose up" instruction without the error "no config file provided" 😅. I will come back in a few weeks, thanks anyway.
What about the code in your previous video, will it work fine now? I am following it and getting a lot of errors. Also, the code in that video is not the same as in the repo (that video is 8 months older).
Generally it should work. The code from the older video is provided under another branch in the same repository: github.com/Leon-Sander/Local-Multimodal-AI-Chat/tree/YTVideoCodeVersion The requirements there contain versions which worked at that time, this might reduce errors.
@@leonsaiagency thanks will try again now
@@Danyal_alam also make sure to check github issues, when encountering errors: github.com/Leon-Sander/Local-Multimodal-AI-Chat/issues?q=
thanks dude
Did you manage to implement a responsive voice chat mode? I'm currently using Open WebUI for about everything else, and LocalAI is also pretty good. But I haven't really found a well-working voice assistant yet. The projects that exist only support English and usually cannot handle interruptions properly. Though I haven't checked if that has changed in the past few weeks. ;-) Finally, there is also this memory issue. There was MemGPT quite a while ago, and there are apps here and there that can write summarizations to a database. But I'm not aware of something that actually saves notes and also retrieves them back into context in a smart way. On the other hand, even ChatGPT is not very good at taking notes.
The underlying voice model is Whisper; you can specify the version you want to use from Hugging Face. It generally supports multilingual input. Also, I think interruptions should not be a problem. This kind of memory feature is not supported yet.
@@leonsaiagency Input is the easy part; the problem is proper output and responsiveness. The number of multilingual fast TTS models is quite limited. Also, you want to listen while talking to respond to "STOP" commands and early responses. I tried to modify a project called June on GitHub and had Claude come up with some algorithm to basically remove the AI's own speech from the incoming audio stream while it is speaking. I wasn't happy with the audio output though, and it's all too complex to be fixed in an hour here and there on a weekend, which is quite frustrating.
@@testales It seems that you're talking about an advanced interactive voice mode; this is not supported in this repository. The UI is built with Streamlit, which gets re-rendered on each UI change, so implementing this advanced interactive mode would be very complicated. Also, yeah, it's a lot of effort; this repository took me many weeks of almost daily work.
It was your video from a year ago that introduced me to the fascinating possibilities of what you can do with today's LLMs and countless tools. Since then, I have created many gadgets to enhance my everyday work. Thank you so much.
Amazing, that's exactly what was intended with the video.
Get the Code here: github.com/Leon-Sander/Local-Multimodal-AI-Chat Let me know if you enjoy the Chat App 🔥 If you want to give some support: buymeacoffee.com/leonsanderai Business/Consultation Contact: leonsander.consulting@gmail.com
Thanks brother you saved my hackathon. Lots of love from india
Does somebody know about this error I am getting in audio handling? LibsndfileError: Error opening <_io.BytesIO object at 0x0000021FDE18F6F0>: Format not recognised.
@@CODE787 The newest version of the repository on GitHub contains a fix; if you cloned the repository, make sure to be on the newest version. If you are coding along with the video, you can also look up the solution in the audio_handler.py file in the repository.
I've got the same error and I don't have a clue how to solve it. Did you solve it?
@@salmahaouch3813 You need to provide more context information. Are you coding along with the video and getting the error there?
Is it free? And is the data we store kept only locally or not?
yes, free and local
One of the best videos on YouTube, man, this is great work. You are amazing, god bless you
Thank you very much, I hope it helps
Did this work for you? I am getting 'module object is not callable'
@@mirunkaushik2672 You have to be more concrete: are you coding along with the video? Did you get an error at a specific point? Did you just clone the repo? More context information is needed.
Getting this error, can you help? ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama_cpp_python)
I would suggest running the code by using docker compose, which should not produce any errors. Other than that you can search on github, most errors have a solution explained there, especially in the closed issues: github.com/Leon-Sander/local_multimodal_ai_chat/issues In the next week I am going to update the code and release a new short video explaining how to run everything.
Thank you, this app is very cool.
I don't have a powerful machine, just 8GB RAM and an Intel Core i5. Will it work properly or not?
PDF chat would take too long, but the rest should work.
@@leonsaiagency I am encountering one more problem, regarding "INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'" when trying to chat with a PDF. Any idea how I can resolve it?
@@laughingbrick7906 If you cloned the repository, I would recommend running it by using docker compose
Hey, do you think these models can run smoothly without a GPU on an Intel Core i5?
@bigRat4335 The normal chatting definitely, but the PDF chat would take a very long time to compute. Audio should be no problem; about image chat I am not sure.
I am getting this error. Can someone help me?
2024-08-20 03:39:18.711 Uncaught app exception
Traceback (most recent call last):
  File "C:\Users\cadmin\AppData\Local\Programs\Python\Python312\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\cadmin\Documents\shipcom\multimodal\app.py", line 149, in <module>
    main()
  File "C:\Users\cadmin\Documents\shipcom\multimodal\app.py", line 68, in main
    chat_sessions = ["new_session"] + get_all_chat_history_ids()
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\cadmin\Documents\shipcom\multimodal\database_operations.py", line 96, in get_all_chat_history_ids
    cursor.execute(query)
sqlite3.OperationalError: no such table: messages
Has anyone encountered this error: sqlite3.OperationalError: no such table: messages?
"Failed to build llama-cpp-python chroma-hnswlib ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python, chroma-hnswlib)" I've been getting this error. Anyone knows how to solve it?
Can you solved even I have that error
Hey where you able to solve this error I am getting the same error?
@@shabbiransari7584 install build tools
Hey bro, actually I wanted to use this same project for an organization. Should I use quantized models or full-size ones, and what changes would you suggest for a multi-user interface? I am going to deploy this chatbot on an HPC, so there is no compatibility issue. Please suggest the necessary changes.
Also add a Dockerfile if you can, it will help a lot.
can somebody help me with this issue:
Traceback (most recent call last):
  File "d:\AI_chat\local_multimodal_ai_chat\app.py", line 2, in <module>
    from llm_chains import load_normal_chain, load_pdf_chat_chain
  File "d:\AI_chat\local_multimodal_ai_chat\llm_chains.py", line 14, in <module>
    config = load_config()
             ^^^^^^^^^^^^^
  File "d:\AI_chat\local_multimodal_ai_chat\utils.py", line 7, in load_config
    with open("config.yaml", "r") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'config.yaml'
30:20 (save chat history sessions by name)
51:00 (pypdfium is used; maybe try converting to markdown before embedding; reranking? source citation?)
59:45 (use a conversational retrieval chain instead of RetrievalQA)
Do we need to use a GPU, or is it fine to use a CPU?
Sir, I am getting 'module object is not callable'
Does anyone know why there are no new videos on this channel ?
Because I was very busy with projects, these videos take a lot of time and effort to produce.
Thanks for sharing, excellent tutorial. How could I deploy a project using an LLM model like this?
Thanks for the video. Great work.
INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'
run pip install sentence-transformers==2.2.2 then restart your kernel.
Hello Leon, right now I am on the basic chat implementation part and I am facing some errors.
When I am not using ctransformers:
def create_llm(model_path = config["model_path"]["large"], model_type = config["model_type"], model_config = config["model_config"]):
    llm = ctransformers(model = model_path, model_type = model_type, config = model_config)
    return llm
then the system gives me: TypeError: 'object is not callable'.
When I am using ctransformers:
def create_llm(model_path = config["ctransformers"]["model_path"]["large"], model_type = config["ctransformers"]["model_type"], model_config = config["ctransformers"]["model_config"]):
    llm = ctransformers(model = model_path, model_type = model_type, config = model_config)
    return llm
the system gives me: KeyError: 'ctransformers'.
Please help me, Leon, ASAP.
1) Run pip install sentence-transformers==2.2.2 then restart your kernel.
2) When you specify the model path in the config.yaml file, keep the model_path and model_type names the same (name them both 'large' or 'mistral'). If you still get an error, then remove the dot (.) in front of model and try. Like this (config.yaml file content):
---
model_path:
  mistral: "models/mistral-7b-instruct-v0.2.Q2_K.gguf"
model_type: "mistral"
model_config:
  max_new_tokens: 512
  temperature: 0
  context_length: 4096
  gpu_layers: 0
embeddings_path: "BAAI/bge-large-en-v1.5"
chat_history_path: "./chat_sessions"
---
Till image handling everything was smooth. After that everything got messed up... I'm getting lots of errors.
Hello brother, I need your help. I am getting ImportError: cannot import name 'load_normal_chain' from 'llm_chains', but I have written the code correctly. Can you please help me? You have done the code up to image handling and I am still stuck on the basic chat implementation. Please help me, bro.
Yes I can @@Saurav-bg7un
@@Saurav-bg7un Sorry, I only read the comments now.
Hi, could you help me? How do I use an OpenAI key instead of HuggingFace? Tell me how I can support you in this project if you help me with this interface.
Great video. Going to ask one thing: why not use Hugging Face instead? Why do we download local models?
lol got my answer from another video of yours: ruclips.net/video/mWdbq3ynie4/видео.html
Hey Leon, just to understand, is the application designed so that we can add in our own data (Such as PDFs) and the responses from the chatbot will be based on the info in the data we initially inputted?
Hello sir, thank you for all these efforts and the quality of your content. I cloned your project "local_multimodal_ai_chat" from your GitHub and downloaded the models as you suggested in the README. I managed to run the application with Streamlit, but when I want to add a PDF I get this error: TypeError: "INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'"
run pip install sentence-transformers==2.2.2 then restart your kernel.
Nice work
This is the best way of getting the job done that I have found here.
Thank you very much. I have a question: if I want to work only with data from the pdfs folder and not with the model's pretrained knowledge, can you please tell me how I can do it?
Bro, I got many errors while executing it, like 'llm chains retrieval was deprecated', 'llm chains embeddings' and many more. Can you please solve them for me?
Same, brother. Which OS are you on?
Why is my model generating outputs slower than in the video? It is taking like 1 minute to generate, sometimes even longer. I have an RTX 4050 laptop GPU and an R7 7735HS processor, and the same model runs faster in LM Studio. Please help me decrease the response time.
I have checked the code in the config and changed it a little bit by setting gpu_layers = 1, and there is still no change in speed.
Dude, same: it takes 1-2 minutes to generate a response with a 4050 GPU and an Intel 13th gen 13700HX processor, but no luck, it is pretty slow compared to the video.
superheroes = { "Batman": "Bruce Wayne", "Superman": "Clark Kent", "Iron Man": "Tony Stark", "AI_LLM Man": "Leon Sander" }
@leonsaiagency Can chat with pdf display a corresponding image of the problem being asked by the chatbot?