Thank you for the Playwright videos. They are helpful for freelancers.
I'm sure it will help. Thanks for watching!
Great tutorial! Hope to see more web scraping tutorials on your channel. I'm a Selenium user and trying out Playwright...
Thank you for your video. The script is good, but it should be more scalable for people who want to download many ZIP codes within a city, and it shouldn't have a hard-coded number of pages to check.
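For anyone who wants to try that, here is a minimal sketch of open-ended pagination with Playwright; the URL and the `a[rel="next"]` selector are assumptions, not the ones from the video:

```python
from playwright.sync_api import sync_playwright

# Sketch: keep saving pages until the "next" link disappears,
# instead of hard-coding the number of pages to check.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://www.zillow.com/some-city/")  # hypothetical URL
    page_num = 1
    while True:
        page.wait_for_load_state("networkidle")
        with open(f"page_{page_num}.html", "w", encoding="utf-8") as f:
            f.write(page.content())  # save the rendered HTML
        next_link = page.query_selector('a[rel="next"]')
        if next_link is None:  # no "next" link means last page
            break
        next_link.click()
        page_num += 1
    browser.close()
```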
Can you make a video on web scraping Google Photos videos?
I will look into it.
Amazing tutorial bro, thank you!
👍
Thank you, it's really helpful, but what if I have to apply filters in Zillow before I scrape it?
Create a loop and use an if statement to filter the records.
@@jiejenn But you were scraping listings for sale; I want sold listings with a lot size between 1 and 4 acres.
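If those filters can't be set on the site itself, filtering the scraped records afterwards could look like this sketch; the `status` and `lot_acres` field names are hypothetical, not from the video:

```python
# Sketch: loop over scraped records and keep only sold listings
# with a lot size between 1 and 4 acres. Field names are hypothetical.
records = [
    {"address": "123 Main St", "status": "sold", "lot_acres": 2.5},
    {"address": "456 Oak Ave", "status": "for_sale", "lot_acres": 0.3},
]

filtered = []
for record in records:
    if record["status"] == "sold" and 1 <= record["lot_acres"] <= 4:
        filtered.append(record)

print(filtered)  # only the sold 1-4 acre listings remain
```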
Do you download the HTML page so that you can read it with BeautifulSoup and extract the data you want? If not, could you please tell me why you are doing it this way?
Not sure if I understand your question, can you be more specific?
@@jiejenn I just do not understand why you are doing this and would like to (I am not criticizing; I am a beginner). I just discovered Playwright through your video and would like to understand why it is used here rather than Selenium, for instance. My guess was that you export the HTML of each page so that you can read it with BeautifulSoup afterwards, and that you do this to avoid getting "HTTP Error: 403".
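For reference, the workflow being guessed at here would look roughly like the sketch below; the URL and the `article` selector are placeholders:

```python
from playwright.sync_api import sync_playwright
from bs4 import BeautifulSoup

# Sketch: let a real browser (Playwright) render the page, which is one
# common way to avoid the "HTTP Error: 403" that plain HTTP requests can
# get, then parse the saved HTML with BeautifulSoup.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/listings")  # hypothetical URL
    html = page.content()  # the fully rendered HTML
    browser.close()

soup = BeautifulSoup(html, "html.parser")
for card in soup.select("article"):  # placeholder selector
    print(card.get_text(strip=True))
```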
Yes, great!