I follow a lot of YouTube channels for AI, but I must say, and I'm sure most of your subscribers will agree, that your content is on another level. Thanks for everything. Can't wait to see your future videos on conversational chatbots (agent simulation) producing remarkable things.
Thank you for your time! Langflow might make an interesting video topic :)
Totally agree. The issue with this is how to put it in a Colab for people to play with. I always try to release some code so people can try it out themselves. I need to look at it again.
@@samwitteveenai My thought was that perhaps for some of your vids you could have a little segment that translates the usage to Langflow, so we can see how to begin stringing graphical nodes together to get the same kind of outcomes you're showing with your code bits? Thanks again for making all this more accessible for the 'common man' :)
Helpful video. I did a Zapier chain with the LangChain example where I set up find-email and send-email actions. It would send the email, but I was asking for summaries of Stoicism and it wasn't putting a summary in the email. Then, when I had both the find-email and send-email Zapier calls, it seemed to call send-email without running find-email first, and I could never get it to fill in the email properly. Not sure how to debug these natural-language bugs.
I would be really interested in using models like Alpaca 7B that you can self-host for this kind of application, and seeing whether the performance of these 'small' models would actually suffice. 🤔
LangChain works with several LLMs, including LLaMA derivatives.
Yes, use Vicuna 7B, or would you wait for RedPajama?
@@jawadmansoor6064, I would say, play with several of them before you start training the model with your custom data.
I actually tested it on GPT4All 7B 4-bit and Vicuna 7B 4-bit (LLaMA-based) models, and frankly it was not working correctly. I tested the python_repl tool by parsing a CSV file and asking it to find rows in the CSV, but unfortunately it didn't produce the correct Python code to do that, although it correctly identified that it should use the pandas library and the DataFrame methods. It kept looping forever, each time giving a wrong response. I think we should do further testing, and I'm sure a larger model like Vicuna 13B or the OpenAssistant 30B model will work correctly (I wasn't able to test the bigger models as I'm resource-limited).
@@randomNunber Thank you for testing and reporting the results.
Hi,
I am getting an OutputParserException for the above code.
Hello, there is a missing import: `from langchain.agents import Tool` in block 16.
Good stuff! Excellent video! 👍
Getting this error when trying to query something from DuckDuckGo search: error code 60, 'SSL certificate problem: unable to get local issuer certificate'.
Any way to solve this? Please help me
Is there a hidden prefix for the prompt that tells the LLM how to use tools in general? That presumably is not native knowledge and is a LangChain feature.
The actual prompt is shown at 7:27
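If you want to print that prefix yourself, here's a minimal sketch assuming the 0.0.x-era API used in the video (the attribute path and the llm-math tool choice are just for illustration and may differ in newer releases):

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)

# initialize_agent wraps your question in a ReAct-style prompt that lists each
# tool's name and description plus the Thought/Action/Action Input/Observation rules.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")

# Print the injected prompt template to see the hidden prefix.
print(agent.agent.llm_chain.prompt.template)
```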
Hi! Is there any way to make the WikipediaAPIWrapper print the output in another language (French, Spanish, etc.)? And for the DuckDuckGo tool, is there any way to set the country for searching information? For example, if I live in Germany and I ask the engine when Germany's next football match will take place, I would like it to give me Germany's time and not the USA's.
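One possible approach, not covered in the video: recent LangChain versions expose locale settings on both wrappers. A sketch, with the caveat that the exact field names (`lang`, `region`) and import paths are assumptions you should check against your installed version:

```python
# Requires the wikipedia and duckduckgo-search packages.
# Newer releases move these wrappers to langchain_community.utilities.
from langchain.utilities import DuckDuckGoSearchAPIWrapper, WikipediaAPIWrapper

# Query the French Wikipedia instead of the English one.
wiki = WikipediaAPIWrapper(lang="fr")
print(wiki.run("Coupe du monde"))

# Bias DuckDuckGo results towards a German locale.
search = DuckDuckGoSearchAPIWrapper(region="de-de")
print(search.run("nächstes Länderspiel der deutschen Nationalmannschaft"))
```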
For some reason, when I run `!pipenv install langchain` in my Jupyter notebook, I keep getting version 0.0.131. Any idea why it's not installing the latest by default?
Try `pip install --upgrade langchain`.
Can I have LangChain running in a notebook on my computer, send it a prompt via an API, and get the response back?
Yes, if you code it up to do that.
@@samwitteveenai Thanks! Would you happen to know which commands I need for the API?
I am not sure I understand what you are trying to do. If you want to run your LangChain app as an API that can be called via HTTP etc., you could use FastAPI or Flask and then expose a port.
@@samwitteveenai Yes, that's exactly it. I will use FastAPI, thank you.
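Roughly, the FastAPI route could look like this: a minimal sketch of my own wiring (not from the video), using a trivial llm-math agent as a stand-in for whatever chain or agent you actually build:

```python
from fastapi import FastAPI
from pydantic import BaseModel
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

# Build the agent once at startup (swap in your own tools/agent here).
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")

app = FastAPI()

class Query(BaseModel):
    prompt: str

@app.post("/ask")
def ask(query: Query):
    # Run the agent on the incoming prompt and return its answer as JSON.
    return {"answer": agent.run(query.prompt)}

# Start with: uvicorn main:app --reload   (assuming this file is saved as main.py)
```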
`from langchain.agents import initialize_agent, Tool`
HERO
`from langchain.agents import Tool` was missing from the notebook.
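For anyone hitting the same NameError, here's a minimal sketch of the imports plus agent setup. It assumes the 0.0.x-era LangChain used in the video; the DuckDuckGo tool import path has since moved to langchain_community, so adjust as needed:

```python
# Requires the duckduckgo-search package.
from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI
from langchain.tools import DuckDuckGoSearchRun

llm = OpenAI(temperature=0)
search = DuckDuckGoSearchRun()

# Wrap the search utility as a Tool so the agent knows when to use it.
tools = [
    Tool(
        name="DuckDuckGo Search",
        func=search.run,
        description="Useful for answering questions about current events.",
    )
]

agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Who won the most recent Formula 1 race?")
```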
Since the OpenAI key is not free, can you estimate how much you spent for this video?
less than 2 cents
For this video it would be low, in the cents (probably 5-10 cents max).
Hi,
Great tutorial! There are some changes in the latest LangChain library.
Warning: `LangChainDeprecationWarning: The function initialize_agent was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. instead.`
How do I use create_react_agent instead of calling initialize_agent()?
Check out my 1st video on LangGraph; it has some of these there.
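For reference, a minimal sketch of the newer constructor pattern (assuming langchain>=0.1 with the langchain-openai, langchain-community and langchainhub packages installed; the hub prompt and the DuckDuckGo tool are just examples):

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)
tools = [DuckDuckGoSearchRun()]

# Pull a standard ReAct prompt instead of the one initialize_agent used to build for you.
prompt = hub.pull("hwchase17/react")

agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "Who is the current CEO of OpenAI?"})
```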
By the way, do you have any full-fledged course on LangChain and using LLMs, apart from this playlist on YouTube?
Do LangChain agents have the ability to access vector databases for custom knowledge that the user sets up (like the PDF example you had in another video)? If so, which tool would that be in LangChain?
Yes, you can do that in the same way as the PDF vid, but use an external vector store like Pinecone etc.
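Roughly, that can look like the sketch below: wrap a retrieval chain as a Tool and hand it to the agent. This is my own rough wiring rather than the video's code; it uses FAISS locally instead of Pinecone, `my_doc.pdf` is a placeholder, and it assumes the faiss-cpu and pypdf packages are installed:

```python
from langchain.agents import initialize_agent, Tool
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

llm = OpenAI(temperature=0)

# Load and embed your own documents into a local vector store.
docs = PyPDFLoader("my_doc.pdf").load_and_split()
vectorstore = FAISS.from_documents(docs, OpenAIEmbeddings())
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())

# Expose the retrieval chain to the agent as just another tool.
pdf_tool = Tool(
    name="PDF Knowledge Base",
    func=qa_chain.run,
    description="Useful for answering questions about the user's own documents.",
)

agent = initialize_agent([pdf_tool], llm, agent="zero-shot-react-description", verbose=True)
agent.run("What does the document say about pricing?")
```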
Very informative; thank you for sharing this amazing stuff.
Does it use the OpenAI LLM to reason about which tool to use for a given task? I'm not sure which line of code actually does this reasoning. I guess it's inside the zero-shot agent and done behind the scenes.
Yes, the OpenAI model is what decides, via text generation, which tool it should use. LangChain then parses that output in the background.
I would love to make a tool that speeds up image search. Basically it would be great to type someone's name or a city etc. and immediately get images for the prompt.
Why would the prompts not use ChatOpenAI? Isn't it 10 times cheaper?
Check out the video after this; I did it with Turbo and explained the issues and differences.
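For anyone who wants to try the swap before watching, a sketch assuming the 0.0.x-era imports used in these videos (the chat agent type prompts the model differently, which is where the issues discussed in that follow-up video come from):

```python
from langchain.agents import initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

# gpt-3.5-turbo is the cheaper chat model the comment is referring to.
chat_llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
tools = load_tools(["llm-math"], llm=chat_llm)

# Note the chat-specific agent type rather than plain zero-shot-react-description.
agent = initialize_agent(
    tools,
    chat_llm,
    agent="chat-zero-shot-react-description",
    verbose=True,
)
agent.run("What is 13 raised to the 0.5 power?")
```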
This doesn't seem to be working very well. I keep getting errors.
Yeah they changed the API for this the day after I released the video. I will update the code over the next few days.
Top video! Really perfect, thanks.
Hi Sam! Good tutorial. I have a question: let's say I'm going custom and want to use the Python tool to answer queries from a pandas DataFrame. How can I do this? Should there be two tools, one that generates Python commands and one PythonREPL tool? I'm actually looking into a custom tool since LangChain's pandas agent does NOT seem to support 'memory', so it can't really be used in a chatbot as it needs to be reminded of the context again and again.
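One rough way to do it (my own sketch, not from the video): give a conversational agent a single Python tool that already has the DataFrame in scope, plus memory. The PythonAstREPLTool import path has moved between versions (it now lives in langchain_experimental), and `data.csv` is a placeholder, so treat the details as assumptions:

```python
import pandas as pd
from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain_experimental.tools import PythonAstREPLTool

df = pd.read_csv("data.csv")  # placeholder path

# One tool is enough: the REPL already has `df` injected into its namespace.
python_tool = PythonAstREPLTool(locals={"df": df})
tools = [
    Tool(
        name="python_repl_df",
        func=python_tool.run,
        description=(
            "Runs Python with a pandas DataFrame called `df` already loaded. "
            "Use it to answer questions about the data."
        ),
    )
]

# The conversational agent type accepts memory, unlike the pandas agent.
memory = ConversationBufferMemory(memory_key="chat_history")
agent = initialize_agent(
    tools,
    OpenAI(temperature=0),
    agent="conversational-react-description",
    memory=memory,
    verbose=True,
)
agent.run("How many rows does the data have?")
```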
Thank you, Sam! I am using LangChain in JavaScript. I couldn't find the DuckDuckGo tool available there. Is there any alternative tool?
There is the paid GSERP tool for Google searches. You could also write one reasonably easily by modifying something like DuckDuckScrape.
Have you taken a look at the cataclysm library? Also, I would love your thoughts on my chatsnack module, more for easy ChatGPT usage; not as advanced as LangChain, though.
Chatsnack looks interesting; will check it out.
Incredible Content!!!
LangChain is AGI or a step towards it.
:D not quite but it's pretty cool.
Hi, I just found "Tool" was not defined in your notebook!
I just updated the notebook now. Thanks to you and others who pointed this out.
Have you tried open source models with this as well?
almost all don't work on these base prompts unfortunately.
@@samwitteveenai Thanks for the feedback. Are there potential datasets that could be used for fine-tuning on these prompting tasks?
@@adriangabriel3219 Not any that are public, AFAIK. I am looking at releasing a model for this.
@@samwitteveenai Could you elaborate a bit on what the data you are using for fine-tuning looks like? What do the instructions look like? Are you using open-source models to generate your training data? Have you used models like Dolly or OpenAssistant together with a sample seed to generate datasets? Could you make a video on that if so? I have tried Dolly extensively with the approach from the Alpaca model (175 sample questions -> then generate more instructions based on those samples) and it failed miserably.
Hi Sam, is this still valid and useful after one year, or are there better tools around? Another thing I don't clearly get is what the difference is with CrewAI or AutoGen, for instance.
I've tested it and I'm getting errors when specifying the tools. Probably some updates to the libraries :/
Great video - thank you:)
Does the OpenAI key cost a lot?
I disagree; I ran it 2 days straight and it was around $15.00 USD.
For this video it would be low, in the cents (probably 5-10 cents max).
You are prolific!
Why does the Google API even cost money, since Google search is free anyway?
It's actually a 3rd party that runs the API.
Thanks. 👍
It gets gloopy glibd
Hi there, I have recently started getting into LangChain and your video is immensely helpful. But I received an error on a Windows machine when I installed LangChain and imported it: pexpect has no method called spawn. I've put up a solution video for it on my channel; I hope it will be helpful for everyone. Thanks a ton for your video though.
Langchain Pexpect Error Solution video: ruclips.net/video/hCJyITK1iis/видео.html
Very good videos, bro. How can I contact you? I think we can do something great.
Best to reach out on LinkedIn.
Do you think we can get around the chat model's confusion over tool use (I've also seen this issue in my experiments) by being more concise and providing examples within the tool description? Thanks again for all your hard work, and love the content! 🥳🦾