If you are interested in the universal web scraper I'm building, please leave your email on this waiting list: forms.gle/8xaWBBfR9EL5w8jr6
Can you pass credentials along with the scraper in a secure manner? For example, I want to scrape NYTimes, but with my NYTimes account.
@@teegees I don't think that's likely, because of privacy and security concerns
I need to bypass Cloudflare, etc.
@@ozxbt I have a solution for this; reply and we can get in touch
It's interesting how much performance gain you got from clean markdown data like Firecrawl's; sometimes you don't need much stronger reasoning, you just need to give the agent better tools
All you need are tools
@@nigeldogg Love it!
You talked about 'universal scrapers', then used a bunch of expensive services to create a very vanilla, hyper-specific scraper that doesn't require LLMs at all... hmm...
It's just stupid; it's all about them using these services and dropping affiliate links rather than finding truly budget-friendly alternatives. I can build the same thing with the public API of an LLM service. It will take me hours, but in the end I'll never need to waste my time again: you can even make the LLM find the names of the classes and IDs you want to scrape, have the LLM generate the code, and run it automatically.
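A minimal sketch of that idea, assuming the OpenAI Python client, requests, and BeautifulSoup; the model name, prompt, and URL are illustrative placeholders:

import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def suggest_selector(html_sample: str, field: str) -> str:
    # One-off LLM call: ask for a CSS selector, then reuse it with no further LLM cost.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user",
                   "content": f"Return only a CSS selector for the {field} in this HTML:\n{html_sample[:4000]}"}],
    )
    return resp.choices[0].message.content.strip()

html = requests.get("https://example.com/products", timeout=30).text  # placeholder URL
selector = suggest_selector(html, "product title")
titles = [el.get_text(strip=True) for el in BeautifulSoup(html, "html.parser").select(selector)]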
Yeah, I believe this commenter, and I just unsubbed. What's with the web scraping grift lately?
9:43 lol
I made a video on my LinkedIn, building in public something similar: a web scraper that summarizes a website and writes an outreach message based on that. Everything is free, as a Chrome extension; you just need a good computer to run Llama locally.
Everything he did can be done for free with the Python libraries he showed. He also explained the issues with scraping very thoroughly and accurately, then demonstrated the solution quite clearly, and then explained the use of agents and LLMs in this context. I really don't understand what you think you just watched.
Really love his accent and voice, very soothing and clear
I am already doing this. It's the same way I trained models to play video games: take a screenshot, convert it to greyscale, but instead of feeding that into a CNN, I pipe it into an agent that I built, which has mouse and keyboard tools instead of the typical Selenium/headless tools. It works pretty damn well, although some models will refuse captchas outright.
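A minimal sketch of that loop, assuming pyautogui and Pillow; ask_agent() here is a hypothetical stand-in for whatever vision-model call decides the next action:

import io, base64
import pyautogui

def screenshot_grey_b64() -> str:
    # Grab the screen, convert to greyscale, and base64-encode it for the model.
    img = pyautogui.screenshot().convert("L")
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()

def ask_agent(image_b64: str) -> dict:
    # Hypothetical: call your vision model and return e.g. {"type": "click", "x": 640, "y": 360}.
    raise NotImplementedError

action = ask_agent(screenshot_grey_b64())
if action["type"] == "click":
    pyautogui.click(action["x"], action["y"])
elif action["type"] == "type":
    pyautogui.write(action["text"], interval=0.05)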
Captchas? Maybe one of those "uncensored" LLMs
How does your mouse know where to click?
@@Plash14 A grid map, I assume.
How do you bypass anti-bot tech? Does the agent click/interact with the game with some degree of randomness?
Hi Mark, can I contact you? Got some doubts 😊
Gonna try out the two examples soon, and please, please launch the universal web scraping agent. I will pay you for that in a heartbeat!
Pls Jason AI
- Jason Finance
10:42 I followed the tutorial and built the scraper with cleanmymac; nothing happens. Installed twice on Ubuntu 22.04, and I only get a lot of index.html files.
Holy shit, that universal ecommerce scraping agent at the end is sick, thanks for sharing that framework!!
Dear Jason, I am a real amateur with coding, so I don't have a clue about so many of the topics I try to execute. I have come across some of your interesting videos while trying, but failed miserably on most of them. Today I just came for the thumbnail, and I'm rolling up my sleeves to implement this masterpiece. Thank you so much & peace from 🇹🇷
You are freaking amazing.❤❤
I've recently been thinking about this idea too. Many thanks for sharing your results!!
Perplexity should use this crawler, since their models are hallucinating reference URLs LOL
Can it be used for scraping estate agents' listings to find a house to buy?
Sure, after a big reworking of the code, and in semi-automatic mode
With all these expensive tools, I think it will be best to build with Playwright.
It will take weeks or months, but it will be cost-effective.
The issue with just Playwright is that it will be detected as a bot.
@@helix8847 Do you know any better alternative?
Is the bounding box method open source?
Looking for a function that returns the X,Y coordinates of an element, roughly:
def find_coordinates(instruction, screenshot):
    return (x_coordinate, y_coordinate)
I wonder if this is an advertisement video or a knowledge-sharing video... Nothing is open source.
I don't believe it's possible to create a universal scraping solution that would be efficient in many edge cases. A custom solution would likely be faster and cheaper, especially if you need to scale.
I've evaluated a lot of scraping SaaS services and used everything from Selenium to headless browsers. There are so many protection mechanisms, including headers, API checks, cookies, etc., and I'm sure I haven't seen a fraction of them. Some sites even require the browser to load JS and render changes on screen.
With AI, we can get closer to an ideal solution. For example, you could take a screenshot if necessary (if the data is graphic and not part of the HTML source) and at the same time scrape the HTML. Then, pass them together to an LLM with your question, and have it return structured data that answers what you need.
However, you need to run the LLM yourself. Any solution using an LLM should allow users to provide an extraction schema, which needs to be very flexible as a prompt. This could be a nice service for hobbyists, but for scale, it would be too expensive. A custom implementation would probably serve better.
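A minimal sketch of that combined screenshot-plus-HTML idea, assuming the OpenAI Python client and a vision-capable model; the model name and schema are illustrative:

import base64, json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract(html: str, screenshot_png: bytes, schema: dict) -> dict:
    # Send the raw HTML and the rendered screenshot together, with a user-supplied extraction schema.
    img_b64 = base64.b64encode(screenshot_png).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative vision-capable model
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": [
            {"type": "text",
             "text": f"Extract JSON matching this schema: {json.dumps(schema)}\n\nHTML:\n{html[:8000]}"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
        ]}],
    )
    return json.loads(resp.choices[0].message.content)

# Illustrative schema the caller would pass in:
schema = {"title": "string", "price": "number", "in_stock": "boolean"}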
I agree it is not easy to build a universal one that works for every website. One path I'm exploring now is to build a good scraper per website category, e.g. one scraper for all ecommerce, one for all company websites, one for all blogs, etc. Then you have a router that sends each site to the right scraper.
@@AIJasonZ Agree with the assessment. In our tests, GPT-4 is still a bottleneck: no matter how good the tools and how clean the data you give it, for a universal scraping / web automation task it often fails to pick the correct next action, goes into loops, performs redundant actions, doesn't abort or complete execution, etc. If you build your agent around a specific workflow where you predefine the sequence of steps to take, that's a different story. But that approach is far from universal.
I never knew web scraping was so hard. I mean, I've been trying to scrape together a decent Instagram following for years, but I guess that's not what they mean by web scraping.
Anyway, who knew websites were like the cool kids at school, only loading their content when you scroll into their 'cool zone' and making you jump through hoops to get to the good stuff.
How does it handle s?
In movies they do all they can so the AI cannot access the internet; in real life: we need web scraping, man, give it access!
Hi Jason, your second example doesn't work. AgentQL doesn't open the Amazon page.
Fire video🔥🔥🔥
The cost per request for this must be through the roof!
Not if you run Llama via Ollama on your own server or local machine, which is doable. Hopefully the cost comes down further soon for the services we can't self-host.
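A minimal sketch of that local route, assuming Ollama is running on its default port with a llama3 model pulled; the model name and HTML are placeholders:

import requests

html_snippet = "<div class='product'><h1>Widget</h1><span>$9.99</span></div>"  # placeholder scraped HTML

# Ask a locally hosted model to structure the scraped HTML; no per-request API fees.
resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default endpoint
    json={
        "model": "llama3",  # illustrative; any locally pulled model works
        "prompt": "Extract the product title and price as JSON from this HTML:\n" + html_snippet,
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])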
I mean in comparison to other, more specialised ML models currently used in industry, where hundreds of millions to billions of requests are being made and cost per request really matters.
What LLMs like this CAN give you is speed to data, which is great for a subset of projects: data from any site without the need to write selectors and extraction code, but at the expense of a high cost per request.
But again, we have ML that can deliver that at scale at a fraction of the cost, and at much higher accuracy.
In a world where simply adding a headless browser to access HTML can 30x the cost per request and kill a project, adding an LLM is simply a no-go.
I'm excited to see the future of LLMs in scraping, but it's VERY early days, and I haven't seen a use case where LLMs used for extracting and structuring the data are significantly faster, cheaper, or better than the existing tech.
Where I have seen LLMs provide practical utility is in the post-extraction process, where they can be used effectively to extract data from unstructured text such as item descriptions.
I'm excited for the future of LLMs when they become practical and the benefits can outweigh the cost in real-world applications, but for now I view them as interesting research projects pushing things forward, and as fun tools for smaller personal projects where budget is not an issue.
I love these kinds of discussions. Last year I attended and spoke at the Extract Summit in Ireland, and I hope to be going again this year to hear more about the latest AI use cases.
To wrap up, I think the best use of LLMs I've seen is to generate XPaths and use those inside cheap-to-run spiders/crawlers. I'm looking forward to seeing what people come up with next.
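A minimal sketch of that XPath-generation idea, assuming the OpenAI Python client and lxml; the model name and URL are placeholders, and the point is that the LLM runs once while the cheap crawler reuses the stored XPath:

import requests
from lxml import html as lxml_html
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

sample = requests.get("https://example.com/listing", timeout=30).text  # placeholder URL

# One-off LLM call to propose an XPath; store it and reuse it on every page the spider visits.
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user",
               "content": "Return only an XPath selecting the product price elements in this HTML:\n" + sample[:4000]}],
)
price_xpath = resp.choices[0].message.content.strip()

tree = lxml_html.fromstring(sample)
prices = tree.xpath(price_xpath)  # what comes back depends on whether the XPath selects nodes or text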
I never can understand why you need an API for the search. Is there any tool that can just type into the Google search bar at all??
Isn't this more affiliate scraping than anything else? What about the code you said you would provide?
What's the event about scraping you quoted in your video? Please cite the link.
I'm here for the thumbnail
Hi, I'm building a PDF QA chatbot that answers from 10 long PDFs. I've experimented with RAG, but the chunks I get from the vector DB often don't provide the correct context. What can I do to get reliable answers based on my PDFs? Will passing the entirety of the PDFs to an LLM with a large max tokens help? It doesn't seem efficient to pass the entirety of the PDFs with every question asked... I'm lost, please help.
Try using Claude models. They have huge context windows, and some models are pretty cheap.
1. Try different-sized chunks. 2. Add adjacent chunks to what the vector DB returns. 3. Include section titles in the chunks.
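A minimal sketch of tip 2 (adding neighbouring chunks), assuming chunks were stored with sequential chunk_id values in their metadata; all names here are illustrative:

def expand_with_neighbours(hits, chunk_store, window=1):
    # hits: results from the vector DB, each carrying the chunk_id it matched.
    # chunk_store: dict mapping chunk_id -> chunk text, built at indexing time.
    wanted = set()
    for hit in hits:
        cid = hit["chunk_id"]
        for offset in range(-window, window + 1):
            wanted.add(cid + offset)
    return [chunk_store[c] for c in sorted(wanted) if c in chunk_store]

# Usage: context = "\n\n".join(expand_with_neighbours(db_results, chunk_store))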
Great work!! I'm currently tackling web scraping challenges, especially with certain sites where determining the delivery location or dealing with pop-ups obstructing the content poses issues. This often requires user action before the search query can proceed. What do you believe are the most effective methods or tools to overcome these hurdles? Sometimes even AgentQL struggles to resolve these issues.
The cost of building this is comparatively higher than creating a website-specific scraper and maintaining it.
We are in a world where data is the most sought-after commodity. And AI is going to make accessing information trivial. I wonder how Big Business will respond. I suspect they'll start pushing for laws to criminalize web scraping in the not-too-distant future. It will be interesting to see how this all plays out in the years to come.
They would never win with that kind of law. If you show data publicly, it's there for the taking. If AI can have vision and mimic a human user, it's game over for hiding data.
Scraping is already illegal in many countries, and if you try to create a business around that data, they will sue you instantly.
Who wants to be a millionaire? Scrape LinkedIn with AI and become a ZoomInfo competitor. You're welcome.
Why don't you go for it?
Nice thanks for that
Would it be possible for me to contribute and collaborate on this project? I’m also working on developing a universal scraper myself.
_Advertisements ✅️ Knowledge ❌️_
Average AI Jason video
@@nonstopper True, this guy has stopped creating valuable content
Sadly, it does feel like that now. Nearly everything he shows now costs money, while there are free alternatives to most of it.
And all the trolls come out... never created a piece of value in their lives for anyone else for free, but they rag on content producers who dedicate a lot of time to bringing value to others. Thumbs up, guys, keep your valueless contributions coming, you're really the heroes here.
This looks sus. Selenium could do this; why do all this work if GPT-4o can generate Selenium scripts faster than you can think?
Great work
You didn't name the title of the talk or the names of the authors or team; got to give credit where it's due... Can we get a link to the videos you're using? The source? I would like to see the whole thing.
Did anyone manage to write the full script, or have access to it?
What about Scrapegraph-ai???
You deserve Subscribe ✅
I'm afraid I will code an infinite AI GPT loop and accidentally get charged thousands in the process.
Hi, I am a student. I want to build the same kind of thing for LinkedIn; is it possible?
Thank you so much for sharing this knowledge❤
You didn't put the signup link for your universal scraper agent.
Thanks for the notes! Here is the link: forms.gle/8xaWBBfR9EL5w8jr6
Thank you for sharing.
Hey, can you make a video on Multion?
Cool video
Good walkthrough. Now we need better hardware to run better models so we can stop paying for lobotomized AI
Curl is a protocol 😂
Does this make money? Or is it a waste of time?
Scammed by ads… these snake oil sellers are out of control
My whole startup is based on scraping. I hope this doesn't catch on...
hah what does your startup do?
Why are you building a startup every week?
Any TL;DR here for this nightmarishly long blob of a video?
Great work, yet I am not willing to use anything that is proprietary.
Hi Jason, thanks for your interesting video. Would it be possible to place your microphone so that we can see your lips when you are talking? For me it's easier to understand English if I can see them. Your huge mic covers so much of your face. Thanks.
Cool.
first lesgoo 🔥
Excellent. Thank you.
Your content is fantastic and your English is top-notch, but your accent adds some overhead to understanding. I hope that doesn't feel insulting; your vocabulary and grammar are better than most native English speakers'. So an idea... Could you look at using GPT-4o to improve elocution (not just in English) in a foreign language? It would be quite useful for many people.
🏆
It's like pirating a game instead of buying it
Total nonsense
Noob
unsubscribed
Ugly banner image
What a waste of time!