This is gold. You have shown your thought process and by following it I can pick up the whole web scraping concept easily. Love your video John.
You are the best teacher to learn scraping
Amazing tutorial as always! Can't wait to try this in production! For any potatoes like me on older Python versions, here are the changes you have to make:
1. Add 'from typing import Optional, List'
2. Update 'rating: float | None' to 'rating: Optional[float] = None'
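A minimal runnable sketch of those two changes, using a plain dataclass as a stand-in for whatever model class the video defines (the class and field names here are illustrative):

```python
# On Python < 3.10, an evaluated `float | None` annotation raises TypeError,
# so fall back to typing.Optional / typing.List.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Product:
    name: str
    rating: Optional[float] = None                   # was: rating: float | None = None
    sizes: List[str] = field(default_factory=list)   # was: sizes: list[str]

p = Product(name="boots", rating=4.5)
```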
Thank you for taking the time to "make things a little bit bigger". So many channels have tiny fuzz in the corner of the screen and a huge empty space.
Best web scraping channel on YouTube.
Just scraped a complete site with 70 lines of code.
This technique kind of only works for client-side rendered (CSR) sites, not server-side rendered (SSR) sites.
This analysis is on the client side. It lets you keep checking the APIs for anything exploitable, which in turn lets you find the channel that exchanges client data with the server.
It would struggle with HTMX too, heh.
@@abg44 This won't work even for that, because he will be blocked by anti-bot systems when hitting non-cached data.
Nice, I ran into the same curl 403 issue while writing a Go scraper and used cf-forbidden to complete my request.
This technique is really for CSR sites. With more and more sites switching to SSR, it's not always possible to just go straight to the APIs.
In most cases SSR is just for the first page, so crawlers get their mouths filled with the right stuff. Subsequent pages are hydrated on the client side over the API. This is the evolved pattern.
I’m new to data scraping, so please excuse my lack of knowledge, but I wanted to ask: since SSR delivers fully rendered content directly to the client, wouldn’t it be simpler to scrape data from SSR websites compared to CSR?
@@wkoell "In most cases SSR is just for the first page".
Why talk when you have no idea what you're talking about? 😂
That's exactly what I was going to say.
@@pedrolivaresanchez No. CSR pages typically include endpoints that return clean, structured data in formats like JSON (as demonstrated in the video), in contrast to SSR, where you need to parse through HTML to extract the desired data (which also comes with a bunch of unwanted CSS and JavaScript).
100% agree that front-end scraping sucks. I remember having a hard time with Python Selenium because class names were being generated inconsistently (maybe just to discourage scraping). For my last scraping project I used Deno with TypeScript. The API was only returning the HTML page for the web app, so I had to install a proxy certificate on my phone and read the mobile requests, which actually returned JSON objects. You have to get creative from time to time, but there is no such thing as an unscrapable API 😅. Thanks for sharing your workflow!
Scraping, btw
This is a masterpiece. More videos like this, John. The way the 20-minute video peppers in the endpoint-manipulation explanation is genius.
Actually, this is the best way of scraping, and it also makes structuring the data easier for me. I was already using this method more than a year ago.
I think this was your last scraping video; nothing else needs to be said about this topic.
Thank you!
Great content as always, thanks! I'm looking forward to the fingerprint video. If I may make one request, I would love to see a video about decrypting the response when it is encrypted. I'm currently trying to deal with a website like that, and I believe the decryption process must be hidden somewhere in the JavaScript, since I can see the data on the website but can't figure out how to crack it. Thanks again for your videos, man. I really appreciate them!
You'd need a secret key, which is commonly kept hidden in .env files, not just floating around in the JavaScript.
@@brendanfusik5654 Thanks for your reply. My problem in the end was actually encoding, with base64 and protobuf layers, not encryption. But thanks anyway!
@@brendanfusik5654 Isn't that a no-go? Pardon my ignorance.
I also scrape data for a living, particularly job data. This is all great information.
Another really good point: sometimes you have to loop over tags in the front end to extract an ID for each item. Building robust solutions that can withstand changes is a learned skill.
How can I learn this and do it for a living? Can you make 20k a year?
@ronburgundy1033 If you're working for yourself, it can be difficult and take some time. You can build up a bunch of data that you have scraped and try to sell it, sell your services to a company that wants something scraped, or work for a company that does its own scraping. Honestly, there are a lot of ways to go about it, but think of it as providing a service and providing data, and you can come up with some good solutions.
In regards to learning, find some sites that you want to try to scrape and start there; when you have a problem, ask on Stack Overflow or somewhere similar. There are also no-code options like UiPath.
Thank you for this. Really thorough and excellent introduction into web scraping.
And here I was about to start scraping and parsing HTML tags.
Yeah, I can't wait to see the TLS fingerprint video 😆
Looks like your video finally made them add some security to their API. Well done Adidas 🎉😄
Just the cureq tip would have saved me a lot of work on figuring out the right headers and cookies for the fingerprint
Sick video, man. So easy to understand and execute, and loads of ideas coming to mind.
Thanks for this! I thought this was yet another BeautifulSoup-type scraping video. Such a detailed explanation!
Thanks a lot for this John, really helpful brother. Bests
Very awesome, John. Insightful content, keep it up!
I try to watch almost every one of your videos; they're very helpful.
Very interesting. I didn't know about the TLS fingerprinting (but I did know about other kinds of fingerprinting).
I agree that most sites are probably fairly easy to scrape, but some seem straight-up impossible. There was one site that I couldn't get around; its anti-bot protection was super good.
Scraping is such a deep and deceptive topic. It looks simple, but there's so much behind it.
New to your channel. I really like your videos. Straight to the point with no fluff.
I've always had a bit of a weird habit of running apps through packet sniffers just to see their API requests. I found it fascinating, although I never really did anything with them. I've noticed that many modern websites like Instagram dynamically load data in a weird way that cannot be seen using the inspector. Do you have a video on this?
Very informative, thanks! I did not know about curl_cffi, but I'm definitely going to check it out now.
Yo, you are the best YouTuber when it comes to scraping.
Thanks for this! This is exactly what I needed!
John, I learned a ton from this and I had a lot of fun. Thanks
Great video John, thanks!
Nice work mate, cheers for sharing.
I like your look: the earphones, the lighting, and the color of your shirt all suit the grey background of the command-line tool.
Important to know: this only works as long as the site's backend doesn't use anti-CSRF tokens on the API requests.
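When a site does use them, the token is usually embedded in the page HTML, so one hedged workaround is to fetch the page, pull the token out, and echo it back on the API call. The meta tag and header names below are common conventions, not guarantees:

```python
import re

# Made-up page fragment; many frameworks render the token in a meta tag.
page = '<head><meta name="csrf-token" content="abc123"></head>'

# Extract the token from the page source.
token = re.search(r'name="csrf-token" content="([^"]+)"', page).group(1)

# Send it back with the API request (header name varies by framework).
headers = {"X-CSRF-Token": token}
```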
Top-level material and content, as always. Thanks a lot.
You earned a new subscriber!
This just saved me so much python coding and HTML scraping for financial data on interactive sites with Java. God bless you :D
Great information and video! I had no idea about TLS fingerprinting.
Another great video; keep up the great work.
Thanks Alan
New to this channel, just wanted to say that your content is so full of quality!!
Great vid! Easy to follow, and comprehensive!
What do you do when a website consists of hundreds of static HTML pages held together with scotch tape and PHP?
Write something to parse and collect from the HTML, and hope to hell they don't change the format of their site.
Maybe build something yourself, and stop consuming other people's work?
@@darz_k. Good advice, man… why are we consuming this informative video then? It's not our work.
@@viIden Even for a logical fallacy, that's weak.
Must do better.
@@darz_k. True, at least he didn't ask for a use case for data collected this way, an actual question worth criticizing for lacking creativity. His was valid and technical.
Well, client-side apps with an API are really easy, like you've shown. It's usually server-side pages where you can't grab the data from any API or XHR request, so you really have to scrape whatever data sits between the HTML elements you get.
Another great video!
Thank you.
Hi John. Thank you so much for these videos. They enabled me to actually create something without looking at thousands of lines of HTML. One question, though: there seem to be some APIs that are invisible in the inspector, even though I know they're there. Is there a way to uncover these hidden APIs?
The best = John
what do you do with the data you scrape?
No hate, I enjoy your content, but "reverse engineer this API" isn't really the right term for projects like these.
Well, he used it so clearly he can 😃
@@JakubSobczak 🤡
Fair enough, i see where you’re coming from. This example was more just seeing and using rather than anything else.
I'd say you're reverse engineering the usage of the API as a client..
As a backend developer, this is honestly unintentionally hilarious. Yeah, you've really got those websites, man. See 18:27, where you make yourself sound like you don't know what you're talking about. Any backend change and it all breaks; IP lockdown, it all breaks; token authentication, it all breaks; OAuth, it all breaks. You are relying on the developer's grace in giving open access, not your skill at accessing it. It's a public API serving a website; you aren't hacking it by providing a new ID to serve different content. This is like a kid thinking they've hacked Google by modifying URL parameters 😂
Absolute banger! One of the best videos I've seen on the topic. Of course, I'm lazy AF and just use AI scraping, and Zyte to unblock, but this is 100% an awesome way to keep costs down to the absolute ground if you have the time to spare. (When did you get a green screen?)
I am a passionate web scraper as well, with a few years of experience. The hardest thing to scrape, in my view, is online Power BI tables (publicly available data); it's almost impossible to fetch the data as the backend doesn't respond. Have you cracked it? If so, could you make a video on it some day?
Sadly, this has an expiration date. Sites are moving more and more towards SSR and even hydration is sometimes html.
With the websites I try to scrape, I can find interesting "responses" like you've mentioned by monitoring network traffic, but when I try to directly access that API request URL in my browser, I encounter variations of this: "message": "403 Forbidden - Valid API key is required". Does this just mean my target websites are intentionally preventing web scrapers from accessing them this way?
What I am doing now is using Playwright to tediously navigate through every page and scrape the content of each one...
I'm subscribing but show us your dog in the next one! 😅
Haha
I assume this relies on the site being a SPA and sending JSON? I'm looking at a site that seems to respond with HTML :/
I think that would also apply to SSR sites, right?
Yes, that's right, but if it's SSR, look in the page source; there's often a lot of JSON data in there that saves you parsing loads of HTML tags.
@@JohnWatsonRooney perfect, thanks :)
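To illustrate that tip: frameworks like Next.js often embed the page's data as a JSON blob in a script tag, so you can parse that one blob instead of walking the HTML. The HTML below is a made-up sample:

```python
import json
import re

# Made-up sample of an SSR page source with embedded JSON:
html = '''<html><body><div>...rendered product cards...</div>
<script id="__NEXT_DATA__" type="application/json">
{"props": {"pageProps": {"products": [{"name": "boots", "price": 59.99}]}}}
</script></body></html>'''

# Grab the JSON blob instead of parsing every HTML tag.
blob = re.search(r'<script id="__NEXT_DATA__"[^>]*>(.*?)</script>', html, re.S)
data = json.loads(blob.group(1))
products = data["props"]["pageProps"]["products"]
print(products[0]["name"])   # boots
```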
Hi John, thank you for the great videos. I have a RAG project (an AI assistant for an English-article website for English-language learners) where I need to use all the articles as a vector database for my RAG agent. How should I automate this for free? Is there a free AI web scraper I can use to build the assistant, or is it better to code an AI scraper from scratch instead of using an external platform?
The best part of all of this is the scammers' loss aversion being used against them, the same way they use it against victims.
Unlike normal scambait shenanigans, they probably feel an immense sense of loss afterwards, since they already felt like the money was theirs. Overall really entertaining.
How would you get TikTok ads that are in-app? The web doesn't have sponsored vids. Wondering how to scrape these.
Basically you need to run a mitm proxy to intercept the requests made by the app. I’ve not done it myself though
Hi, I wanted to follow this tutorial, but it seems that the search JSON response is no longer available. Any thoughts on how to fix that?
Your tutorials are a great help! A lot of sites are switching to Cloudflare, and they detect scraping a lot of the time. Do you have any tutorials on HLS/DASH segmented video?
I have one more question: do we need to get permission from a website, or contact them via email, before web scraping their content? Sometimes their guidelines and terms of use are vague. Do you ask for permission for your videos? I ask because I want to use their data to feed a RAG project, as a vector data repository for semantic search for AI.
Unless OpenAI or some other LLM provider loses a lawsuit for scraping publicly available data, I doubt it should be an issue.
Yes, you absolutely need to get permission. This is their site, they built it, it's their data, not yours.
"How I STEAL data from 99% of sites" is the correct title for this video...
What a scumbag you are, John.
Build your own app instead of basing it on theft.
Hey John, very good video! I was wondering if I can help you with higher-quality editing on your videos and make highly engaging thumbnails that will help your videos get more views and engagement. Please let me know what you think!
I think 99% of people need the UPC code, price, title, and link.
I really liked the video, and I noticed that a lot of it is reverse engineering the site or its APIs. But what can I do when I get blocked because the site uses Cloudflare, for example?
Thank you very much for your contribution!
I've been trying to scrape some data through an API, but every hour the cookie needed in the headers expires. How can I extract the cookie automatically instead of manually copying it from the latest cURL?
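One pattern that can help (a sketch with hypothetical URLs): let a `requests.Session` collect the cookie itself by visiting the normal page first, and rebuild the session whenever the API starts rejecting you:

```python
import requests

def fresh_session() -> requests.Session:
    """Build a session that collects cookies like a browser would."""
    s = requests.Session()
    # The default `requests` User-Agent is an easy giveaway, so replace it.
    s.headers["User-Agent"] = "Mozilla/5.0"
    return s

# Usage sketch (hypothetical URLs):
# s = fresh_session()
# s.get("https://example.com/search")           # server sets the cookie here
# items = s.get("https://example.com/api/items").json()
# On a 401/403 an hour later, rebuild the session and repeat the page visit.
```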
Can you make a video explaining the waterfall stuff at the bottom of the Fetch/XHR tab? I can see that whenever you click, it comes up as grey.
What if the XHR requests are hidden? When I go to the response, it just says false.
There will be lots of XHR requests; have a look through them all and see if any have the data you need. It doesn't work for all sites, though.
@JohnWatsonRooney I am finding some JSON responses now, thank you. One issue I'm running across is that it's not consistent: I've found about two items with this information loading, but the rest don't have it. Why might this be? I do see a GET with a 404 called "current.jwt?app_client etc." Do you have any videos on possible roadblocks to scraping sites, in the context of the type of scraping you use in the video?
Can you please make a video of how to handle SSR scraping?
Thanks so much for this. Now I am getting {"error":"Anti forgery validation failed"} on a particular site. Any thoughts on how to work around it?
Brilliant Video!
Do you have a course?
This is a legit video! 💪💪
Great! Next, make a video on how to scrape YouTube data.
John, truth be told, you are very good at your craft, but you have never done an end-to-end project with deployment and hosted APIs. You may want to look into that.
If you do an end-to-end project that's deployed, with automated scraping using cron jobs as a scheduler, trust me, it will boost your viewership.
Yeah seen this same video a hundred times
I'm pretty good with this format. He's probably one of the few YouTubers who cover the latest scraping techniques.
@@kexec. Please share these "latest scraping techniques" you speak of.
@@stickyblicky11 Did you even watch the video? 💀
@@kexec.Yeah it’s pretty much standard practices 💀
ROOOOOONEY!
It's good for small websites, but what about LinkedIn and other big-data websites? You can't reverse engineer them because there is no XHR file. How can we reverse engineer them?
Hey john still waiting.
Probably best to avoid scraping websites like LinkedIn unless you want to get banned from the platform or sued.
@john Do you have a course, and how can I get in touch with you?
Nice, but this was like a scraper's dream and a very easy example.
Selenium Wire, bro. Just sniff the JSON packets and catch them.
He did a video on that
From where do you get web scraping work?
Wow, so clean. Goodbye, Beautiful Soup.
I have never seen this approach, but it seems a lot easier than faffing about with website designs and puppeteer or selenium.
I learned a ton from this and it changed the way I look at scraping tonight. Already worried about the urge to break TOS with certain subscription based websites I use for work lol
If it’s in the TOS you’re really just attempting to be lucky by going under the radar. Definitely unadvisable 😅
@@Michael-kp4bd It's not illegal to scrape something just because it's against a website's TOS; this has been hashed out in court many times. What you do with the scraped data, if it's for profit, is where the law comes in. They can ban your account, though, so you've got to decide whether you care about whatever account the scraping can be tied to, if you need one.
@@namegoeshere2805 I know it's not illegal; I'd be worried about civil action. Getting sued is not fun. There's likely a high threshold before it's worth it for the company to go after you all-out. I guess they'd probably give you a chance to shut your shit down with a cease-and-desist before taking legal action, forcing you to pay big bucks just to hire a lawyer. As long as you didn't go too far to begin with and invoke some kind of alleged damages that they'd seek outright.
Will this work on any website, like Instagram or LinkedIn?
Please design a course for beginners, not coders, to dive into and learn ❤. Also, suggest which tech to start learning and where to start from.
Do you have a github with code examples?
Even my grandma can do this.
Is parsing HTML the best way to scrape server-rendered pages?
Incredible as always. Going to AI/DB this; a much better process than Scrapy. Cheers, 100z
What can we do with this data? Any ideas, please?
Thank you very much ☻
Excellent Work :-)
What about sites without JSON, that just serve a document?
Can you teach us how to scrape a website with a cart? I've been working on one for months, but I can't add a product to the cart via requests.
Why is scraping the HTML not going to work at all?
Great content
ProxyScrape provided a free list of proxies, and they all failed. 😂
Hey guys, why don't I see search?q=boots in dev tools? I'm a newbie; thanks for helping.
I'm scraping data from a shipping line's website, but I need to log in to get the bearer token and enter it into my Python code for all the API calls to work. I need to be able to log in via Python and obtain the access token. Is this possible?
Try submitting a post request to the auth login endpoint
What Snozcumber said, or you can automate signing in with a headless browser and copy the cookies.
@@Pigeon-envelope Thanks dude
@@hurtado-w9c cheers, very helpful!
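For anyone finding this thread later, a generic sketch of that flow. The endpoint path, JSON field names, and token key are all assumptions; check the actual login request your browser sends in the Network tab:

```python
import requests

def bearer_headers(token: str) -> dict:
    """Build the Authorization header from a bearer token."""
    return {"Authorization": f"Bearer {token}"}

def login(base_url: str, username: str, password: str) -> requests.Session:
    """POST credentials, then attach the returned token to the session."""
    s = requests.Session()
    resp = s.post(
        f"{base_url}/api/auth/login",            # hypothetical endpoint path
        json={"username": username, "password": password},
    )
    resp.raise_for_status()
    token = resp.json()["access_token"]          # hypothetical response key
    s.headers.update(bearer_headers(token))
    return s
```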
How do you deploy a Selenium script? I couldn't do it.
This is not going to work if they restrict their API to being consumed only from their own domain,
and also if there's rate limiting set up on the backend.