Dammit stop telling everybody about Jina my secret weapon, just stop, it's my advantage, everybody ignore it it's horrible I swear
TOO LATE
@@devlearnllm 😉. This was one of the most valuable vids I've seen in the past few weeks considering the contents: the top 3 specialized scrapers I searched hard for, all mentioned together in one good video. Nice. Add a good, cheap open-source LLM like Llama 3 and it = $$$ if you know how. Data is valuable; things that weren't possible or affordably viable for most people before are now. I can do stuff for $12 that some would pay thousands for. It's a wonderful new world!
Just finished something awesome with Python, Jina, and OpenRouter Llama 3 in 2 days that's gonna double my revenue or more, and I don't even know how to code lol, thanks GPT. Jina does have a paid API key on the API page btw: 1M tokens free, which worked out to about 580 pages. But the pricing is insanely low: 500M tokens, or about 280,000 pages, for $10. That destroys Firecrawl pricing (which is also good and has its place, but is much more costly). I think ScrapeGraph uses an LLM to parse, so it's gonna be expensive on tokens, right, sending the raw website to LLMs? I've asked them like you did.
I only wish Jina showed menus and internal links; then it would be perfect. Those have valuable data themselves and identify the more valuable pages to visit next, like pricing. I'll ask if there's a way, but I guess I can add something cheap to the workflow for that. Any suggestions? Probably some Python library; I'll ask Perplexity lol. I'm actually new to the tech side, but I see the business value as a marketer, so I'm learning as fast as I can! It's the new gold rush.
Great video, subbed - looking forward to more. Cheers
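Regarding the wish above for menus and internal links: a cheap addition to the workflow could be Python's standard library alone, no extra packages. This is only a sketch - the `internal_links` helper and the sample HTML are made up for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collects href targets from <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def internal_links(base_url, html):
    """Return absolute, de-duplicated URLs on the same host as base_url."""
    parser = LinkCollector()
    parser.feed(html)
    host = urlparse(base_url).netloc
    resolved = (urljoin(base_url, href) for href in parser.hrefs)
    return sorted({u for u in resolved if urlparse(u).netloc == host})

nav = '<nav><a href="/pricing">Pricing</a><a href="https://other.com/x">Out</a></nav>'
print(internal_links("https://example.com/", nav))  # → ['https://example.com/pricing']
```

Feeding a page's raw HTML through this before (or alongside) the reader call would surface pages like /pricing for a follow-up visit.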
@@devlearnllm thought you said Jira and was so confused..
@@TheBrighamhall imagine lol
🤣🤣🤣🤣
This video made my SaaS possible, thanks - I had no idea 5 months ago what LLM scraping was.
I'm glad! I talk further in-depth about web scraping here: app.catswithbats.com/90d4bd29/a15db702
The YouTube algorithm is just insanely good at what it does; this is exactly the content I needed, and I think I have found what I want to dedicate my life to as a professional.
Thank you for the video. I will buy your course as soon as I collect the money.
Thank you for introducing all the latest technology for web scraping!
@LLMs for Devs. I'm from Jina AI. Cool that you are using our reader app. I like seeing the exact use cases people use it for - very interesting.
Big fan of Jina.
@@devlearnllm hi times are tough. can I borrow 10000k? I need rent money and lost my job as a retail worker at Dicks sporting goods in dallas.
The reader API tip is so clutch. Thank You!
Dropping bomb content. Meanwhile, I made a comment analyzer for highly detailed videos with 100+ comments, since I didn't have time to go through them all. Man, sometimes you don't need to build an Iron Man suit to do simple stuff.
Printing this comment out and putting on my wall
Bro, did you build a comment analyzer for all YouTube videos where all you need to do is post a YouTube link? That's a nice project!
@@aarushsaboo1194 It's impossible to read thousands of comments bro, and time is money.
I like this format of video...background has a large monitor...Nice video
Thank you so much for the presentation. Just in time with the latest scraping technology
You bet!
Just started wondering about web scraping and here you are.
Thank you.
If anyone’s having issues viewing the notebook on GitHub, it’s GitHub’s fault. Feel free to clone it (the code is there; GitHub just couldn’t display it recently: stackoverflow.com/questions/78501731/error-nbformat-when-uploading-to-github-from-google-colab)
How did I not get your content sooner? Love it!
The transcript at 1:39 states that you are using large sandwich models. This must be a brand new type of model - mouth watering indeed. 😂
Heck yeah 🥪
I replayed 1:39 three times. Is he saying 'large sandwich model'? 🤣
Thanks for adding a new project to my to do list!
Great video! The open source tool looks great!
As an aside, I use Instructor and Pydantic classes to get the LLMs to provide the JSON as I expect it. In my limited experience, DSPy wasn't as explicit as I wanted.
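For readers who haven't seen the Instructor + Pydantic pattern mentioned here, a minimal sketch might look like the following. The `PricingTier`/`PricingPage` models and field names are invented for illustration, and the Instructor call is left commented out since it needs an OpenAI API key; the validation half runs on its own:

```python
from pydantic import BaseModel

class PricingTier(BaseModel):
    name: str
    monthly_price_usd: float

class PricingPage(BaseModel):
    tiers: list[PricingTier]

# With Instructor you patch the OpenAI client and pass the Pydantic model as
# response_model, so the LLM's JSON output is parsed and validated automatically:
#
#   import instructor
#   from openai import OpenAI
#   client = instructor.from_openai(OpenAI())
#   page = client.chat.completions.create(
#       model="gpt-4o-mini",
#       response_model=PricingPage,
#       messages=[{"role": "user", "content": scraped_markdown}],
#   )

# The validation step itself works without any API key:
raw = '{"tiers": [{"name": "Pro", "monthly_price_usd": 49.0}]}'
page = PricingPage.model_validate_json(raw)
print(page.tiers[0].name)  # → Pro
```

If the LLM returns JSON that doesn't match the schema, `model_validate_json` raises a `ValidationError` instead of silently passing bad data downstream.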
Good idea
Are you using those two libraries with the Agency Swarm agentic framework? It uses them as well to ensure performance/quality. If not, it might be something you'd be interested in - a proper production-capable agentic framework. That, with its automation and decision-making capability, plus Jina + LLMs = profit for so many use cases.
Are you referring to DSPy assertions?
I feel really sad that you publicly talked about Jina. I used to feel special knowing very few people were aware of it lol
my badd
16:05 Worth trying out GPT-4; I find it more accurate at following instructions.
Great intro and work flow. Thanks a lot.
Much appreciated!
Thank you. I'll be "away" for a while while I conquer the...I mean save the world!
Came at the perfect time. Very good video. Thx 😊
Such valuable video content! Many thanks for sharing~~
Great demo, thank you!
My pleasure!
It's a cool product, but the only issue is Jina getting blocked as a bot, so it's not making it past the "Are you human?" screen.
But the first problem every crawler has to face is how to avoid being blocked.
there are ways, maybe I will do a video about that ... but that is a dark art :)
@@PracticalAI_ I'm really looking forward to it!😊
Rotate proxies and randomize your queries, dude - easy task
@@Van-Helssen lol it's not 2014; proxies are recognized by most providers, and they will immediately invalidate the user (if you are scraping while logged in). There are other ways, using regular IPs.
@@PracticalAI_ *residential proxies, as you probably know…
Is the LLM community really not aware of 40 year old Natural Language Pre-processing methods developed for data mining and NLP?
Could you explain further? I don't see how what you said connects to this subject.
I don't know if the community is aware that this has been a problem to solve for quite some time.
Great video, thanks. Is there a way to provide our own scraped data (so we can make sure we use a good stealth scraper and get all the content), and then the LLM analyses it like this?
Yeah, you can always just build an LLM chain to just extract data. You can find the example in the Google Colab I provided.
pure gold, thanks man!
keep up the good work! - this is an awesome presentation!
TY
If I'm going to scrape millions of pages regularly, no way in hell would AI come anywhere close in accuracy and efficiency to a plain HTTP request, or a browser load and jsoup parsing.
I used ScrapeGraphAI and was also stuck on getting the cost, but I got it by making some changes inside the scrapegraphai library: internally it uses LangChain and LangSmith, so it was already calculating the cost.
That's awesome. How do you get it to work with LangSmith?
GPT 4o can do this now. Just tested and it's awesome.
YouTube really knows what I'm looking for :V With Python, crawling a website with an LLM is simple - just a few lines of code. 8 years ago I used Python tools to do the same thing with far more effort. Right now I'm trying to mix data from websites/databases with a knowledge map for an observation view, so I can find the shortest path through it. That takes less time than reading an entire book in this field; you just focus on a few topics but still get the result. Anyway, you introduced the method with LLMs. Thanks.
Awesome. Thanks for sharing
Great stuff man, thanks a lot!
Cheers!
I don't know how effective this will be in the long run, especially given Cloudflare's security update to block AI web-scraping agents.
How do these tools cope with Cloudflare running on the target site, which attempts to block scraping?
Can't stop the bots. I know about SeleniumBase for Python… takes some research, but… hey
This video was really helpful for people like me looking for web-scraping tools.
Though I wonder if Jina AI is really free. Is there any challenge in using it for a larger number of links?
Does it have a rate limit when hitting URLs with the prefix?
Any clarification on this is appreciated. :)
No hard limits as far as I know. Free for now (I think this is intentional), but definitely will change in the future.
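For context on "hitting URLs with the prefix": Jina's Reader is used by prepending `https://r.jina.ai/` to the target URL, which returns the page as LLM-ready Markdown. A minimal sketch (helper names are illustrative; the actual network call is left commented so the snippet runs offline):

```python
from urllib.request import Request, urlopen

def reader_url(url: str) -> str:
    """Jina Reader trick: prefix a page URL to get it back as Markdown."""
    return "https://r.jina.ai/" + url

def fetch_markdown(url: str) -> str:
    """Fetch the Markdown rendition of a page through the reader endpoint."""
    req = Request(reader_url(url), headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8")

print(reader_url("https://example.com/pricing"))
# → https://r.jina.ai/https://example.com/pricing
# md = fetch_markdown("https://example.com/pricing")  # real network call
```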
🎯 Key Takeaways for quick navigation:
00:00 *🚀 Introduction to web scraping for LLMs in 2024*
- Overview of startups pivoting to web scraping.
- Mention of Mendable and its Firecrawl tool for scraping the web using large language models.
02:06 *🔍 Scraping competitors' pricing pages*
- The process of scraping competitors' pricing for market research.
- Introduction to tools used for scraping: Jina AI, Mendable, and Scrapegraph-ai.
03:01 *🧠 Understanding tiktoken and its application*
- Explanation of tokenization and encoding in web scraping.
- Discussion on the cost implications based on tokenization.
05:17 *🛠️ Setting up scrapers with Beautiful Soup and other tools*
- Description of different scraping tools and their setup.
- Comparisons among Beautiful Soup, Jina AI, and Mendable based on ease of use and output.
07:32 *📊 Running scrapers and analyzing outputs*
- Execution of web scraping and evaluation of the output from different tools.
- Analysis of readability and format of the scraped data.
09:37 *💰 Cost comparison and effectiveness of scraping tools*
- Comparison of costs associated with various scraping tools.
- Evaluation of which tool provides the most value for money.
12:53 *🤖 Extracting pricing information using OpenAI*
- Utilization of OpenAI for extracting specific data points.
- Challenges and strategies in obtaining clean and useful information.
17:20 *🌐 Overview of Scrapegraph for advanced web scraping*
- Introduction to Scrapegraph as an open-source project.
- Examples of complex data extraction and its accuracy.
Made with HARPA AI
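The tokenization and cost discussion in the takeaways above can be made concrete with a small estimator. This is a sketch: the price is an assumption taken from the $10-per-500M-tokens figure mentioned elsewhere in this thread, and the chars/4 fallback is only a rough heuristic for when tiktoken isn't available:

```python
def estimate_cost_usd(n_tokens: int, usd_per_million_tokens: float) -> float:
    """Cost of processing n_tokens at a flat per-million-token rate."""
    return n_tokens / 1_000_000 * usd_per_million_tokens

def count_tokens(text: str) -> int:
    """Exact count via tiktoken when available; chars/4 heuristic otherwise."""
    try:
        import tiktoken  # may also fetch the BPE file on first use
        return len(tiktoken.get_encoding("cl100k_base").encode(text))
    except Exception:
        return max(1, len(text) // 4)

sample_page = "Pricing: Pro plan $49/mo, Team plan $99/mo. " * 200
# $10 per 500M tokens works out to $0.02 per million:
print(round(estimate_cost_usd(count_tokens(sample_page), 10 / 500), 6))
```

Counting tokens on a handful of representative pages before a big crawl gives a cheap upper bound on what the whole job will cost.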
This Jina thing is cool. The Beautiful Soup scraper is obviously not a solution. Most web pages (especially articles, media, etc.) have schema.org ld+json ready to be extracted, though, and there are some good Python libs for getting that metadata. There are many scraping APIs, and most of them are not worth the cost IMO; phantomjscloud is probably one exception, depending on volume. Otherwise, one must find a good proxy provider and send a bunch of fancy HTTP headers to bypass anti-bot measures, like you said. BlackHatWorld is a great resource for proxies and all manner of other accounts. The whole scraping thing is a giant rabbit hole. Jina is for sure keeping all that data. It's not a bad plan, actually; I think I may do the same.
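A sketch of the ld+json extraction mentioned above, using only the standard library (the sample HTML is invented; real pages may carry several blocks, or arrays of objects):

```python
import json
from html.parser import HTMLParser

class LdJsonExtractor(HTMLParser):
    """Collects parsed <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_ldjson = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_ldjson = True

    def handle_data(self, data):
        if self._in_ldjson:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._in_ldjson:
            self.blocks.append(json.loads("".join(self._buf)))
            self._buf = []
            self._in_ldjson = False

html = ('<html><head><script type="application/ld+json">'
        '{"@type": "Article", "headline": "Hello"}'
        '</script></head><body>...</body></html>')
extractor = LdJsonExtractor()
extractor.feed(html)
print(extractor.blocks[0]["headline"])  # → Hello
```

Since the metadata is already structured, nothing needs to go through an LLM at all for pages that carry it.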
Thank you!!!!!!
Great presentation! I'm surprised Jina AI's free scraper doesn't require an API key?!! I guess it might be shut down for public access soon.
There is a paid version that's worth it. Check the API page; the key at the bottom generates a unique one somehow. You get 1M tokens free, then $10 for 500M tokens, which is like 280k pages - insanely low and basically free anyway. Crazy valuable tool.
@@jarad4621 oh wow! it's amazing! thanks for clarification
Amazing video, thank you
Can anyone speak to the architecture or other tools to prevent detection when using Beautiful Soup, as he mentioned? What would be the best process to avoid detection, and which tools? I wish you had elaborated there, considering it's the subject of the video in large part.
Underrated - glad I found this.
How well does Jina do with bigger sites with anti-bot protection?
Jina love it...
I'm curious how you handle pages where the content exceeds the token window.
I'm sure Firecrawl or Jina would have a rolling context window for extraction. It's an easy thing to implement.
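One way such a rolling window could be implemented - a sketch with made-up chunk sizes; a real pipeline would count tokens rather than words:

```python
def overlapping_chunks(text: str, chunk_words: int = 800, overlap: int = 100):
    """Split text into word-based chunks that overlap, so a fact straddling a
    chunk boundary still appears whole in at least one chunk."""
    assert chunk_words > overlap, "overlap must be smaller than the chunk size"
    words = text.split()
    step = chunk_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_words]))
        if start + chunk_words >= len(words):
            break
    return chunks

doc = " ".join(f"w{i}" for i in range(2000))
chunks = overlapping_chunks(doc, chunk_words=800, overlap=100)
print(len(chunks))  # → 3
```

Each chunk is sent to the LLM separately and the extracted results are merged afterwards, deduplicating anything the overlap caused to be extracted twice.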
Thanks, but what's the difference - which is better, Jina Reader or ScrapeGraph-AI?
Haven’t watched it fully yet, but I’m really curious to see how it handles the looming threat of model collapse.
edit: Yeah it didn’t talk about it. It’s going to be hellish when the internet becomes increasingly flooded with LLM output
Can you please fix the camera? I'm already feeling dizzy within 60 seconds due to the constant camera movement!
Working on it. Just need to find the setting in DJI Pocket 3 to slow down the tracking speed
Would these work for a dynamic website?
Would be cool to make an AI website scraper that strips away all the JavaScript bloat from a webpage and converts it into a lightweight, basic HTML page while preserving functionality. It would be great as a proxy service to make loading modern web pages fast on slow phones with poor data connections. The modern web is way too bloated. I sometimes manually archive a page by deleting all the JavaScript in Notepad++ and modifying image embed links to point to locally saved .png files. That takes a long time, but I can reduce a 5MB page down to 200kB and save that. Would be nice to have a smart automated tool that does it in seconds.
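A crude sketch of the script-stripping idea (regexes will miss edge cases that a real HTML parser handles, and removing event handlers obviously does not preserve functionality, so treat this as a starting point only):

```python
import re

def strip_scripts(html: str) -> str:
    """Drop <script>...</script> blocks and inline on* event handlers."""
    html = re.sub(r"<script\b[^>]*>.*?</script>", "", html, flags=re.S | re.I)
    html = re.sub(r'\s+on[a-z]+\s*=\s*"[^"]*"', "", html, flags=re.I)
    return html

page = '<p onclick="track()">Hi</p><script src="big.js"></script><script>init();</script>'
print(strip_scripts(page))  # → <p>Hi</p>
```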
Gold!
Can Jina handle sites with lazy loading? Looking at dealership websites.
Not that I'm aware of. But Firecrawl has Actions now.
What are good and easy-to-use tools with LangChain? An LLM is not very useful without such tools; it doesn't even know today's date.
Using Jina now hehe. Does anyone know if you can get better results from Amazon?
lol that's awesome!
I guess Selenium is still the choice for JavaScript-heavy websites… any tips on this?
I don't know if my question is stupid, but can we take snapshots of a website and use OCR and LLMs to scrape the useful info, instead of sending requests to that website? It would look more human and also use fewer requests.
Yeah you can probably do that!
@@devlearnllm thanks 🤝
So what did you find out about ScrapeGraphAI? Performance, token usage?
I wonder if you could update it to use gpt-4o-mini, as it's much cheaper.
yep
You haven't updated us on how much ScrapeGraphAI costs in comparison.
Ah shoot I forgot about that.
Hi, I'm trying to scrape web data from my org's docs, which are accessible only within a VPN. It failed to reach the docs URL. Can you help me with this?
"The entire internet hates him for this one simple trick"
9/10 prompt engineers recommend this
Jina is almost perfect… too bad it's not smart enough to scrape content from "accordions", where you first click to make the content visible. I feel a smart AI scraper should be able to grab that text and determine, based on CSS class, that it's probably valuable text - just hidden at the time.
That's too bad. What's the alternative?
These scraping tools are impressive… but they are not ready for scraping full websites with 100s of webpages. Unfortunately, there is still significant room for manual scraping.
I tried to read or download Web_scraping_for_LLM_in_2024.ipynb, but it's not readable - can you replace it?
OK, I can read it in Colab.
What’s the best way to get in touch?
Details in the video’s description
Damn, bro get ready for heavy lifting) baldness is coming
Been there, you’ll look much much better!
Lmao thanks brother
Does it work in Portuguese?
Fix your camera thats annoying AF
Sounds like you don’t like the swiveling on it
Please do not move the camera all the time
Definitely loosen up the tracking to center. OSBTail?
It's actually built-into the DJI Pocket 3 camera. I just had it for a few weeks. Just need to find the settings for it.
@@devlearnllm change the follow speed to slow instead of fast.
broken link to github
Yeah there’s something weird with GitHub not displaying the notebook right. The link is the same.
The motion-tracking is a bit distracting.
None of these seem better than Trafilatura?
scrapegraph looks cool though
@@flor.7797 How's your experience using Trafilatura? I haven't tried that yet
@@devlearnllm I’m more into main content extraction and boilerplate removal. There isn't a one-size-fits-all solution, unfortunately.
such an inefficient and unreliable way to scrape the web
That's basic stuff. I feel like it's 2023, and I was late to the party too.
"how to block these fuckin idiots AWS servers to protect your website" next
Who else was disappointed after clicking the thumbnail and seeing a dude?
me
The camera moves too much
its the worst
@@devlearnllm Aside from that, great video.
Firecrawl is too expensive
You should definitely wear pants.
Interesting! Although I was distracted by your attire… Seriously, I was not born 30 years ago, man, but can we dress a bit better for a presentation?!
Lol what's wrong with my wardrobe
@@devlearnllm Hi! Sorry, but think about it: you are doing everything right, so why dress like that? Why not better?