WebVoyager

  • Published: Jan 24, 2025

Comments • 38

  • @ricand5498 (11 months ago, +85)

    My left ear enjoyed this video very much

    • @Jakolo121 (11 months ago, +9)

      LOL I thought my headphones were broken

    • @dineshkumarkinjangi8994 (11 months ago)

      @Jakolo121 I kept mine for charging 😂

    • @LangChain (11 months ago, +9)

      Sorry about that... not sure why!

    • @AtulR (6 months ago, +2)

      On Mac: System Settings > Accessibility > Audio > Play stereo audio as mono. Just remember to switch it back off after this video.

  • @dLightsGG (8 days ago, +1)

    On macOS: System Settings > Accessibility > Audio > Play stereo audio as mono.
    On Windows: Settings > System > Sound > Turn on mono audio.
    On Linux (GNOME-based distros): Settings > Accessibility > Hearing > Enable Mono Audio.
    On Android: Settings > Accessibility > Audio/Visual (or Hearing Enhancements) > Enable Mono Audio.
    Just remember to switch it back to Off after this video.

  • @anonymous6666 (11 months ago, +5)

    I greatly appreciate the thorough, simple, and easy-to-understand explanations, especially around LangGraph.

  • @Username56291 (8 months ago, +3)

    Please, let's crowdfund a better microphone for him; his videos are really good and he deserves it. Thanks for the amazing contribution to the community.

  • @tianrenw (20 days ago)

    Really great demo. Thanks.
    In this demo application, the model does not have to be multimodal, right? The img (b64) in AgentState is not used anyway, though it is useful for debugging, etc.
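    The state field the comment refers to might look roughly like the sketch below. The field names (img, observation) are assumptions for illustration, not the repo's exact schema; the point is that a base64 screenshot can ride along in the state purely for debugging without ever being sent to the model.

    ```python
    from typing import Optional, TypedDict

    # Hypothetical sketch of the demo's agent state; field names are
    # assumptions, not the repository's exact schema.
    class AgentState(TypedDict, total=False):
        input: str            # the user's task
        img: Optional[str]    # base64-encoded screenshot, kept mainly for debugging
        observation: str      # last tool result fed back to the model

    def debug_dump(state: AgentState) -> str:
        """Summarize the state without printing the full base64 blob."""
        img = state.get("img") or ""
        return f"input={state.get('input', '')!r}, img_bytes={len(img)}"

    state: AgentState = {"input": "find the docs", "img": "aGVsbG8="}
    print(debug_dump(state))
    ```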

  • @ajinkya81194 (11 months ago, +3)

    Is there a way to do this using other LMMs such as Gemini Pro Vision or LLaVA 1.6?

  • @free_thinker4958 (6 months ago, +2)

    I read the example code before coming here and understood it a little, but once I watched the LangGraph video here I felt confused because its pace is so fast.

  • @andrushka324 (10 months ago)

    It is so cool that you guys make videos about different use cases. Please improve the sound quality and describe the topics in more detail. 🙂

  • @mayanklohani19 (11 months ago)

    Can it be used on any URL to do a kind of functional testing? I tried changing the URL but it didn't work.

  • @aifarmerokay (11 months ago, +2)

    We want an agent with a local open-source LLM and a memory implementation 😊

  • @intelpakistan (11 months ago)

    Creative and clean! The sound could be improved, though. Still great value.

  • @antwierasmus (10 months ago)

    How do you run this as a Python script rather than in a Jupyter notebook? I am getting an "Event loop is closed" error, perhaps related to asyncio.

    • @code-build-deploy (8 months ago)

      Did you get it solved? If so, can you help?

    • @stephbrn4107 (2 months ago)

      @code-build-deploy On some Windows machines you can't run it in a Jupyter notebook. I converted the notebook into a .py using a jupyter function and just had to put the code in a main, and it worked.
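    The fix described in this thread can be sketched as follows. This is a common workaround for "Event loop is closed" when moving notebook code into a plain script, not the repo's official solution; run_agent here is a stand-in for the actual async agent call (e.g. awaiting the graph).

    ```python
    import asyncio
    import sys

    async def run_agent() -> str:
        # Stand-in for the notebook's async agent invocation;
        # replace with the real call, e.g. `await graph.ainvoke(...)`.
        await asyncio.sleep(0)
        return "done"

    def main() -> None:
        # Notebooks already run inside an event loop, so top-level `await`
        # works there; a script must create its own loop via asyncio.run().
        # On Windows, setting the proactor policy explicitly before
        # asyncio.run() avoids some loop-teardown errors with subprocess-based
        # tools such as Playwright.
        if sys.platform == "win32":
            asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
        result = asyncio.run(run_agent())
        print(result)

    if __name__ == "__main__":
        main()
    ```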

  • @mayanklohani19 (11 months ago)

    Can we use the LLaVA model here from Ollama?

  • @zenofthepup1530 (11 months ago, +1)

    prompt error on the hub

  • @KushJuvekar-j3f (4 months ago)

    Is anyone else getting a "prompt must be 'str'" error with this code?

  • @gitmaxd (11 months ago, +1)

    This is great, ty!

  • @aiexplainai2 (11 months ago)

    very interesting idea!

  • @metamarketing3402 (11 months ago)

    This is very cool. 😃

  • @avisimkin1719 (8 months ago)

    Did anyone try this with a local model (LLaVA, for example)?

  • @VivekGautam-o8v (11 months ago)

    These are good, but I'm looking for JavaScript support.

  • @TristanvanDoorn (11 months ago)

    Nice, but it seems to have some glitches that need to be ironed out. Nevertheless, great work!

  • @build.aiagents (11 months ago)

    Phenomenal

  • @2107mann (11 months ago)

    Awesome

  • @DanielGonzalez-wr7fz (11 months ago, +1)

    I would like to implement a "Learning Mode" for this WebVoyager agent: teach the agent an action by recording a manual navigation through the browser and then save it as a "Tool" or a "Succession of steps".
    Could you please give me some references or some clues on how I can achieve this?

    • @piyushsinha5545 (8 months ago)

      If you got the solution, please do share; I'm working on something similar.

    • @ayeshaimran (7 months ago)

      Perhaps use RAG for this purpose... every set of actions can be added to a vector database along with its result, and before taking any step the agent can do a quick vector search to see if that action has been done before and retrieve the successful series of steps taken.
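    The idea suggested in this thread could be sketched as below. Everything here is illustrative: a toy word-overlap score stands in for real embeddings and a vector database, and the class and method names are invented for the example, not part of WebVoyager.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ActionMemory:
        """Toy 'vector store' of recorded browser-action sequences.

        A real implementation would embed task descriptions with an
        embedding model and query a vector DB; word overlap stands in
        for cosine similarity here, purely for illustration.
        """
        recordings: list[tuple[str, list[str]]] = field(default_factory=list)

        def record(self, task: str, steps: list[str]) -> None:
            # Save a manually recorded navigation as (task, steps).
            self.recordings.append((task, steps))

        def _similarity(self, a: str, b: str) -> float:
            wa, wb = set(a.lower().split()), set(b.lower().split())
            return len(wa & wb) / max(len(wa | wb), 1)

        def lookup(self, task: str, threshold: float = 0.5) -> list[str] | None:
            # Before acting, check whether a similar task was already solved.
            best = max(self.recordings,
                       key=lambda r: self._similarity(task, r[0]),
                       default=None)
            if best and self._similarity(task, best[0]) >= threshold:
                return best[1]
            return None

    memory = ActionMemory()
    memory.record("search flights to Tokyo",
                  ["click search box", "type query", "press enter"])
    print(memory.lookup("search flights to Tokyo cheap"))
    ```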

  • @cuties4698 (5 months ago)

    Awesome project, but he is only speaking into my right ear.

    • @Cynosureepr (3 months ago, +1)

      You have your headphones on backward.

  • @bhakti214 (1 month ago)

    Haha, Microsoft Edge