My left ear enjoyed this video very much
LOL I thought my headphones were broken
@Jakolo121 I kept mine for charging 😂
Sorry about that... not sure why!
On Mac: System Settings > Accessibility > Audio > Play stereo audio as mono. Just remember to switch it back to off after this video.
On macOS: System Settings > Accessibility > Audio > Play stereo audio as mono.
On Windows: Settings > System > Sound > Turn on mono audio.
On Linux (GNOME-based distros): Settings > Accessibility > Hearing > Enable Mono Audio.
On Android: Settings > Accessibility > Audio/Visual (or Hearing Enhancements) > Enable Mono Audio.
Just remember to switch it back to Off after this video.
I greatly appreciate the thorough, simple, and easy-to-understand explanations, especially surrounding LangGraph.
Please let's set up a crowdfunding campaign to give him money for a better microphone. His videos are really good and he deserves it. Thanks for the amazing contribution to the community.
Really great demo. Thanks.
In this demo application, the model does not have to be multimodal, right? The img (b64) in AgentState is not used anyway, though it is useful for debugging, etc.
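For reference, here is a rough sketch (not the repo's exact code) of how a base64 screenshot held in state is typically attached to a multimodal prompt. If the agent builds a message like this from the img field, the model does need vision support; if the screenshot is never attached, a text-only model would do:

```python
# Illustrative only: passing an img (b64) field to a multimodal chat model
# as an image_url content part.
from langchain_core.messages import HumanMessage

def build_vision_message(img_b64: str, question: str) -> HumanMessage:
    return HumanMessage(content=[
        {"type": "text", "text": question},
        {"type": "image_url",
         "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
    ])
```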
Is there a way to do this using other LMMs, such as Gemini Pro Vision or LLaVA 1.6?
I read the example code before I came here and understood a little of it, but once I watched the LangGraph video I felt so confused, because the pace of the video is so fast.
Finally someone can relate
That is so cool that you guys make videos about different use cases. Please improve the sound quality and describe the topics in more detail. 🙂
Can it be used with any URL to do a kind of functionality testing? I tried changing the URL but it didn't work.
We want an agent with a local open-source LLM and a memory implementation 😊
Creative and clean! The sound could be improved though. Still great value.
How do you run this as a Python script instead of in a Jupyter notebook? I am getting an "Event loop is closed" error, perhaps related to asyncio.
Did you get it solved? If so, can you help?
@code-build-deploy On some Windows machines you can't run it in a Jupyter notebook. I converted the notebook into a .py file using Jupyter's export function and just had to put the code in a main, and it worked.
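A minimal sketch of that pattern, assuming the notebook's top-level `await` calls are the only thing that needs wrapping (the placeholder coroutine stands in for whatever the notebook actually awaits, e.g. the graph invocation):

```python
import asyncio

async def main():
    # Move the notebook's top-level `await ...` calls into this coroutine,
    # e.g. `result = await graph.ainvoke({...})`.
    await asyncio.sleep(0)  # placeholder for the real awaited call
    print("done")

if __name__ == "__main__":
    # asyncio.run() creates, runs, and closes one event loop for the whole
    # script, which avoids "Event loop is closed" errors from ad-hoc loops.
    asyncio.run(main())
```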
Can we use the LLaVA model here from Ollama?
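One hedged way to try it, assuming LLaVA has been pulled locally via Ollama; whether its vision handling and output quality are close enough to the hosted vision model used in the demo is something to verify:

```python
# Sketch only: swap the hosted vision model for a local LLaVA served by Ollama.
# Assumes `ollama pull llava` has been run and the Ollama server is running.
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="llava", temperature=0)
# The agent node would then call `llm.invoke(messages)` in place of the
# OpenAI vision model instance used in the notebook.
```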
prompt error on the hub
Is anyone else getting a "prompt must be 'str'" error with this code?
This is great, ty!
very interesting idea!
This is very cool. 😃
Did anyone try this with a local model? (Llava for example)
These are good, but I'm looking for JavaScript support.
Nice, but it seems to have some glitches that need to be ironed out. Nevertheless, great work!
Phenomenal
Awesome
I would like to implement a "Learning Mode" for this WebVoyager agent, in order to teach the agent an action by recording a manual navigation through the browser and then saving it as a "Tool" or a "Succession of steps".
Could you please give me some references or some clues on how I can achieve this?
If you got the solution, please do share. I'm working on something similar.
Perhaps use RAG for this purpose... so every set of actions can be added to a vector database along with its result, and before taking any steps the agent can do a quick vector search to see if that action has been done before and retrieve the successful series of steps taken.
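A rough sketch of that idea, assuming LangChain's FAISS wrapper and OpenAI embeddings are available (faiss-cpu installed, API key set); the `record` and `lookup_prior_steps` helpers are hypothetical names, not part of the WebVoyager code:

```python
# Hypothetical "learning mode" store: index task descriptions, keep the
# recorded browser steps as metadata, and look them up before planning.
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
store = None  # created lazily on the first recorded task

def record(task: str, steps: list[str]) -> None:
    """Save a successful series of browser actions for a task."""
    global store
    new = FAISS.from_texts([task], embeddings,
                           metadatas=[{"steps": " -> ".join(steps)}])
    if store is None:
        store = new
    else:
        store.merge_from(new)

def lookup_prior_steps(task: str) -> str | None:
    """Before acting, check whether a similar task was solved before."""
    if store is None:
        return None
    hits = store.similarity_search(task, k=1)
    return hits[0].metadata["steps"] if hits else None

record("log in to example.com",
       ["click #login", "type #user", "type #pass", "click Submit"])
print(lookup_prior_steps("sign in to example.com"))
```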
Awesome project, but he is only speaking to my right ear.
You have your headphones on backward.
haha, microsoft edge