Web Scraping AI AGENT, that absolutely works 😍

  • Published: 6 Nov 2024

Comments • 102

  • @jbo8540 · 6 months ago +3

    If your LLM gives you an article you can't find, my first assumption is that it made it up. While this is an interesting use case, it's likely going to take very precise prompt engineering to avoid hallucinated outputs.

    • @1littlecoder · 6 months ago +3

      No, it's my bad. After the video I reviewed the web page; in fact, I added the screenshot in the video. It was inside the carousel.

  • @Raphy_Afk · 6 months ago +3

    Amazing! If my PSU wasn't dead I wouldn't be sleeping for days

  • @bastabey2652 · 5 months ago +1

    this ScrapegraphAI tool is the most interesting scraping tool I've tested so far

    • @De-e-kay · 4 months ago

      I am not having success with it. It only gives me URLs, titles, and related posts, not the content I ask for.

  • @unclemike2008 · 6 months ago +5

    "poor" Love you brother! Right there with you. Great video. Been trying and failing to get a scraper with java support. Cheers!

  • @alx8439 · 6 months ago +11

    Next time it will also need a visual model to solve captchas, because website administrators will be protecting their precious content from scraping :)

  • @marcoaerlic2576 · 5 months ago +1

    Really great video, thank you. I would be interested in seeing more videos about ScrapeGraphAI.

  • @HeberLopez · 6 months ago +1

    I find this live example pretty useful for general purposes; I can think of multiple ways I could use this for one-off PoCs

  • @patrickwasp · 6 months ago +9

    It’s a spider, not an octopus. Spiders crawl on webs.

    • @opusdei1151 · 6 months ago

      What would an octopus be, then? Something that crawls APIs or does data mining?

  • @alqods80 · 6 months ago +1

    There is a Playwright function that skips irrelevant resources so the scraping becomes faster
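The commenter is presumably referring to Playwright's request interception. A minimal sketch of the idea, using the Playwright sync API; the `BLOCKED_TYPES` set and the `fetch_lean_html` helper are illustrative choices, not part of ScrapeGraphAI:

```python
from typing import Set

# Resource types that rarely matter for text scraping (illustrative choice).
BLOCKED_TYPES: Set[str] = {"image", "stylesheet", "font", "media"}

def should_block(resource_type: str) -> bool:
    """Decide whether a request is irrelevant to text extraction."""
    return resource_type in BLOCKED_TYPES

def fetch_lean_html(url: str) -> str:
    """Load a page while aborting images/CSS/fonts/media to speed things up."""
    # Local import so the helper above stays dependency-free.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # Intercept every request; abort the heavy ones, let the rest through.
        page.route("**/*", lambda route: route.abort()
                   if should_block(route.request.resource_type)
                   else route.continue_())
        page.goto(url)
        html = page.content()
        browser.close()
        return html
```

Blocking at the routing layer means the browser never downloads those bytes at all, which is where the speedup comes from.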

  • @madhudson1 · 5 months ago

    It depends on the LLM used and the questions you pose it. It can often fail to generate JSON, and the library isn't best suited for iterating through a collection of sites

  • @kalilinux8682 · 6 months ago +1

    Could you please do more videos on this? Like trying to use it on more educational content with equations rendered using MathJax and KaTeX

  • @Balajik7-qh1pq · 6 months ago +1

    I like all your videos, keep rocking bro

  • @inplainview1 · 6 months ago +3

    Watching this before youtube gets upset again. 😉

    • @1littlecoder · 6 months ago +2

      Honestly, I was actually scared before uploading this, but let's see!

    • @inplainview1 · 6 months ago +1

      @1littlecoder Hopefully all is well.

  • @ayyanarjayabalan · 6 months ago

    Awesome, we need more practical sessions with code like this.

  • @manojy1015 · 6 months ago

    We need more tutorials with practical live examples of LLMs, especially RAG and fine-tuning

  • @edgarl.mardal8256 · 5 months ago

    You are the best Indian YouTuber I have seen to date.

  • @TUSHARGOPALKA-nj7jx · 8 days ago +1

    Very useful!

  • @liamlarsen9286 · 6 months ago

    Thanks for the heads-up at 6:00.
    It worked only when using that version.

  • @meetscreationz5591 · 4 months ago

    Hi, could you please elaborate on setting the base_url port number? Also, where did you check the Ollama information? Kindly guide. TIA

  • @NaveenChouhan-mm5gz · 5 months ago +1

    I tried to install scrapegraphai but I'm getting stuck on the yahoo search dependency, which breaks the execution and returns an AttributeError.

    • @Ashort12345 · 5 months ago

      Is it the same error or not here? I'm at a very beginner level; if someone knows how to fix mine, please leave a comment:
      ---------------------------------------------------------------------------
      AttributeError                            Traceback (most recent call last)
      Cell In[25], line 17
            3 graph_config = {
            4     "llm": {
            5         "model": "ollama/mistral",
           (...)
           13     }
           14 }
           16 # Instantiate the SmartScraperGraph class
      ---> 17 smart_scraper_graph = SmartScraperGraph(
           18     prompt="List me all the articles",
           19     source="news.ycombinator.com",
           20     config=graph_config
           21 )
           23 # Run the smart scraper graph
           24 result = smart_scraper_graph.run()

      File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\scrapegraphai\graphs\smart_scraper_graph.py:47, in SmartScraperGraph.__init__(self, prompt, source, config)
           46 def __init__(self, prompt: str, source: str, config: dict):
      ---> 47     super().__init__(prompt, config, source)
           49     self.input_key = "url" if source.startswith("http") else "local_dir"

      File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\scrapegraphai\graphs\abstract_graph.py:49, in AbstractGraph.__init__(self, prompt, config, source)
           47 self.config = config
      ...
      --> 227 params = self.llm_model._lc_kwargs
          228 # remove streaming and temperature
          229 params.pop("streaming", None)

      AttributeError: 'Ollama' object has no attribute '_lc_kwargs'

  • @TailorJohnson-l5y · 5 months ago

    Great video! Thank you!

  • @moonwhisperer4804 · 5 months ago

    If only this tool had a way to automatically page through paginated results and go into each detail page to extract data
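Pagination isn't handled for you, but you can drive `SmartScraperGraph` (the class shown in the video and in the traceback above) in a loop. A hedged sketch; the `?page=N` URL pattern and the `scrape_all` helper are assumptions about the target site, not library features:

```python
from typing import List

def paginated_urls(base: str, pages: int) -> List[str]:
    """Build page URLs using a hypothetical ?page=N scheme; adapt to the real site."""
    return [f"{base}?page={i}" for i in range(1, pages + 1)]

def scrape_all(base: str, pages: int, prompt: str, graph_config: dict) -> list:
    """Run one SmartScraperGraph per listing page and collect the results."""
    # Local import: scrapegraphai is an optional dependency of this sketch.
    from scrapegraphai.graphs import SmartScraperGraph
    results = []
    for url in paginated_urls(base, pages):
        graph = SmartScraperGraph(prompt=prompt, source=url, config=graph_config)
        results.append(graph.run())
    return results
```

Detail pages could be handled the same way: have the first pass return the detail URLs, then loop a second graph over them.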

  • @ngoduyvu · 5 months ago

    Thanks for the tutorial. Please make more tutorials on ScrapeGraphAI; can you make one for scraping websites that have anti-bot protection or require credentials (login)?

  • @Ari_Alur · 6 months ago +1

    Would it be possible to explain the whole thing to someone who has nothing to do with programming? I was able to install everything, but I can't do anything with the code from GitHub...
    Would be great :) Thanks for the video! Very interesting, but unfortunately not feasible for me.
    (I'm on Linux)

    • @1littlecoder · 6 months ago +1

      Do you want me to show how to run the code from GitHub? Would it be helpful?

    • @Ari_Alur · 6 months ago

      Yeah! At least in a way that's easier to understand. I don't know anything about code, so I need things to be clear and simple.

    • @Ari_Alur · 6 months ago

      Thanks!:)

  • @monuaimat5228 · 6 months ago +1

    RAG: Ritual Augmented Generation 😂

    • @J3R3MI6 · 6 months ago +1

      🕯️🕷️🕯️

  • @BiXmaTube · 6 months ago

    Need a proper PDF-parsing AI that I can run on a cloud server without a GPU: extracting text, tables, and images and arranging them in a DB based on a prompt that puts each piece of data in the right table. It would be amazing if you could find something like that.

  • @DhruvPatel-vl1tj · 4 months ago

    There is a problem I am encountering: for many websites I am getting an empty response from the library. I have tried many solutions listed in their official documentation, like proxy rotation, using different models, etc. Also, the output it gives for any website takes a minimum of 2-3 minutes. Please help me solve the problem.

  • @morease · 5 months ago

    I fail to see why RAG is needed when the library can simply be asked to identify the HTML path/element that contains the content and then extract the HTML from it with Cheerio

  • @LeeBrenton · 6 months ago

    Scrape Facebook please! I need to do the most boring thing for work. I tried to program a scraper, but FB makes it very hard; I was only partially successful (especially at grabbing the post date). This method looks very exciting :)

    • @webhosting7062 · 6 months ago

      What were your requirements?

    • @LeeBrenton · 6 months ago

      @@webhosting7062 I write a daily report based on the new posts in various FB groups, but FB doesn't put posts in the correct order (also, pinned posts at the top will be old posts), so I need to check the date. But FB obfuscates the date like a MF; I wasn't able to figure it out with Selenium.
      So the requirement is: get the latest (less than ~24-hour-old) posts from an FB group.

  • @Kevinsmithns · 4 months ago

    Have you used Vapi to automatically do cold calls?

  • @CM-zl2jw · 5 months ago

    🤣 I enjoy your sense of humor. Thank you. You are RICH in kindness and intelligence. That's almost as good as money... money only buys limited amounts of happiness.
    Your videos are very helpful and informative. I'll pay you to help me figure a couple of things out. What's your contact?

    • @1littlecoder · 5 months ago

      Thank you! 1littlecoder@gmail.com is my email

  • @mihirprakash6009 · 4 months ago

    Hi, can it scrape from the web in general, not just a particular website?

  • @EobardUchihaThawne · 6 months ago

    OK, now that's a good usage of an AI model

  • @shobhanaayodya7024 · 6 months ago

    That logo is a spider 🕸️🕷️

  • @einekleineente1 · 5 months ago

    It would have been nice if you had shown how to install Ollama locally first.

    • @1littlecoder · 5 months ago

      I'm sorry, I had done it a few times before, so I didn't repeat it: ruclips.net/video/C0GmAmyhVxM/видео.html

    • @einekleineente1 · 5 months ago +1

      @@1littlecoder cool. Thank you 👍🏻

  • @aionair77 · 6 months ago +1

    BTW, that's a spider in the logo. It's a spider that lives in the World Wide Web 😅

    • @1littlecoder · 6 months ago

      How did I not even think about it?😭😭😭

    • @aionair77 · 6 months ago

      @@1littlecoder :)

  • @darkreader01 · 4 months ago

    If we want to scrape websites that need authentication, how can we do that? Is there any way to log in first, or any option to use cookies?
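One possible workaround (not a documented ScrapeGraphAI feature): fetch the authenticated page yourself with Playwright using your session cookies, then hand the resulting HTML to the scraper. The cookie-conversion helper below is illustrative:

```python
from typing import Dict, List

def as_playwright_cookies(cookies: Dict[str, str], domain: str) -> List[dict]:
    """Convert a simple name->value mapping into Playwright cookie records."""
    return [{"name": k, "value": v, "domain": domain, "path": "/"}
            for k, v in cookies.items()]

def fetch_authenticated_html(url: str, cookies: Dict[str, str], domain: str) -> str:
    """Load a page inside a browser context that carries the session cookies."""
    # Local import: playwright is an optional dependency of this sketch.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.launch()
        context = browser.new_context()
        context.add_cookies(as_playwright_cookies(cookies, domain))
        page = context.new_page()
        page.goto(url)
        html = page.content()
        browser.close()
        return html
```

The returned HTML string could then be passed as the `source` of a `SmartScraperGraph`: the library's own code (visible in the traceback in this thread) switches to a local input key whenever the source doesn't start with `http`.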

  • @planplay5921 · 6 months ago

    It still has the risk of being blocked; it's just a way of parsing

  • @ramanaraj7 · 4 months ago

    Can we use the Gemini API to do the same?

  • @jarad4621 · 5 months ago

    Is the LLM there to convert the raw HTML to structured data? Then it saves to RAG and you can query the data with another LLM to analyse? I need to scrape homepages from 10k sites into structured data in a RAG DB so I can ask the sites questions. Can it be set up to do many sites, like an automated agent, or can it be used as a tool or function call in an agent framework like CrewAI? That video would be cool

  • @Anesu-nv1mh · 17 hours ago

    Can it also scrape photos and videos and download them?

  • @adriangpuiu · 6 months ago

    Another question: what if we only want to scrape and not embed anything?

    • @1littlecoder · 6 months ago

      I think in those cases you can probably use a conventional library, I guess. But that's a good question; there are different classes within this library that might let it do that.

    • @adriangpuiu · 6 months ago

      @@1littlecoder
      from scrapegraphai.graphs import BaseGraph
      from scrapegraphai.nodes import FetchNode, ParseNode, GenerateAnswerNode

      graph = BaseGraph(
          nodes=[
              fetch_node,
              parse_node,
              generate_answer_node,
          ],
          edges=[
              (fetch_node, parse_node),
              (parse_node, generate_answer_node),
          ],
          entry_point=fetch_node
      )
      .. I don't have time to try it now because I'm at work :))

  • @tauquirahmed1879 · 6 months ago +1

    great video....

  • @AI-Wire · 6 months ago

    So, is this impossible to run in Colab? I like to automate many of my tasks using GitHub Actions.

    • @1littlecoder · 6 months ago

      You can run it on Colab, but you'd need OpenAI keys
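For reference, a hedged sketch of what the Colab variant might look like. The `api_key` field and the exact OpenAI model string are assumptions about the config schema (only the `ollama/mistral` shape appears in this thread); verify against the library docs:

```python
def openai_graph_config(api_key: str, model: str = "openai/gpt-3.5-turbo") -> dict:
    """Build a ScrapeGraphAI-style config pointing at OpenAI instead of a
    local Ollama server (assumed schema; check the library documentation)."""
    return {"llm": {"api_key": api_key, "model": model}}

def run_on_colab(api_key: str):
    """Run the same scrape from the video, but against a hosted LLM."""
    # Local import: scrapegraphai is an optional dependency of this sketch.
    from scrapegraphai.graphs import SmartScraperGraph
    graph = SmartScraperGraph(
        prompt="List me all the articles",        # prompt from the thread's traceback
        source="https://news.ycombinator.com",
        config=openai_graph_config(api_key),
    )
    return graph.run()
```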

  • @MadhavJoshi-m8m · 6 months ago

    "Only" is pronounced "own-lee",
    not "one-lee".
    Btw, great video

  • @honneon · 6 months ago

    i luv it❤

  • @prasannaprakash892 · 6 months ago

    This is great, thanks for sharing. Can you share your Python version, as I am getting an error when running the same code?

  • @oliverli9630 · 6 months ago

    Wondering when somebody will integrate `undetected-chrome` into it.

  • @IdPreferNot1 · 6 months ago

    What am I missing... error running the async cell?

  • @jmirodg7094 · 5 months ago

    thanks! 👍

  • @Macorelppa · 6 months ago +1

    🥇

  • @user-nm2wc1tt9u · 5 months ago

    Does it work on Google Colab?

  • @DM-py7pj · 6 months ago

    looks something like spider (scrape/crawl) + bone (GET/fetch) + document | parse ( HTML) ???

  • @viddeshk8020 · 6 months ago

    I don't understand: for web scraping, why do I have to install so many other dependencies like Ollama etc.? I mean, it is just simple web scraping; why make things complex? Still, for a complex task a complex prompt needs to be given.

    • @liamlarsen9286 · 6 months ago

      Ollama is just a framework to run LLMs locally, so it downloads the model instead of using an API and connecting to a server

    • @madhudson1 · 5 months ago

      If you just want scraping, don't bother with this.
      However, if you want scraping + RAG with LLM integration, then use this. But it's not without its issues

  • @CryptoMaN_Rahul · 2 months ago

    Wanted to do it using the Mistral API

  • @yashsrivastava677 · 6 months ago

    Will it work to scrape LinkedIn jobs?

  • @kushagrakapoor9181 · 4 months ago

    Hey man, I'm getting a NotImplementedError

  • @adriangpuiu · 6 months ago

    Can it do heavy JavaScript sites? :))

    • @1littlecoder · 6 months ago

      I've not tried it! it'd be a good opportunity to try that, especially given it uses Playwright!

    • @adriangpuiu · 6 months ago

      @@1littlecoder I'll tell ya, I tried and it fails miserably :)). If you have better luck, let us know, man

    • @1littlecoder · 6 months ago

      @@adriangpuiu Ah, that's bad. Which website was it?

    • @adriangpuiu · 6 months ago

      @@1littlecoder The user replies are encapsulated in a JS response from what I noticed; maybe they have an API or something. I was just unable to figure it out. YET...

    • @adriangpuiu · 6 months ago

      @@1littlecoder It's the Appian discussion forum

  • @Naniirowadesuka · 5 months ago

    Reddit being called the front page of the internet is like... no, please

  • @rahuldinesh2840 · 6 months ago

    I think Chrome extensions are best.

  • @webhosting7062 · 6 months ago

    What about a site built with jQuery? Does it work for that too?

    • @1littlecoder · 6 months ago +1

      I have not tried it. Someone else in the comments said it might not be very good.
