Always Check for the Hidden API when Web Scraping
- Published: 31 Jul 2021
- DISCORD (NEW): / discord
If this method is available, it's the best way to scrape data from a site. I will show you how to find the API endpoint that we can use to directly get the JSON data being sent from the server, before JavaScript gets its mucky paws on it and makes it look like what we see in our browser. It's quick and simple, and with just a few extra tips and techniques it can transform your web scraping.
Scraper API: www.scrapingbee.com/?fpr=jhnwr
Patreon: / johnwatsonrooney
Proxies: proxyscrape.com/?ref=jhnwr
Hosting: Digital Ocean: m.do.co/c/c7c90f161ff6
Gear I use: www.amazon.co.uk/shop/johnwat...
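The core idea from the description — pulling JSON straight from the endpoint the page's JavaScript calls — can be sketched in a few lines. This is only an illustrative sketch: the URL, the `products` key, and the field names are invented stand-ins for whatever you find in your browser's Network tab (XHR/Fetch filter), and it assumes the `requests` library.

```python
import requests

def fetch_json(url):
    """GET a hidden JSON endpoint found in the browser's Network tab."""
    headers = {"User-Agent": "Mozilla/5.0"}  # some sites reject bare clients
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

def extract_products(payload):
    """Keep only the fields we care about from the raw response."""
    return [{"name": p.get("name"), "price": p.get("price")}
            for p in payload.get("products", [])]

# Illustrative response shape -- real key names vary per site:
sample = {"products": [{"name": "Trail Shoe", "price": 59.99, "sku": "X1"}]}
print(extract_products(sample))  # [{'name': 'Trail Shoe', 'price': 59.99}]
```

Splitting the fetch from the field extraction keeps the parsing logic testable without hitting the network.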
I've been doing this for years as a self-taught programmer; there are some little tricks you showed here that I didn't know. Thank you for the video.
Glad it was helpful!
It's my first year in programming and there was nothing new, actually. I don't even think the pain was worth it; I'd just make the scraper in JS and have it return a JSON string.
But I guess that would be useless for bigger projects. I'd just do it in JS if I wanted an actual product list like this.
@D R lol
@D R how do you “block” scraping??
This single-handedly cut the running time of my program from literal hours to a couple of minutes, cannot thank you enough!
Brilliant, thanks!
I've been using this trick for a while now, and I've learned it from you, so thanks. Amazing work man
That’s great 👍
I rarely praise anything, but this tutorial was SO good! Well explained, no filler. In 7 or 8 minutes you guided me through finding the hidden information I needed, which tools I need to use and how to automate it. This tutorial gave me enough confidence to try to write my first Python script! Within hours I built a scraper that can pull all metadata for a full NFT collection from a marketplace. Without this video it would have taken days/weeks to discover all of this
That's awesome! Thank you, very kind!
"In 7 or 8 minutes"
More like 11
@@channul4887 Nope! I had different goals so no need to follow the full tutorial
I even added it to my playlist. Great video. Definitely starting to love APIs more and more.
Because of this video, I was able to start my own rockets and satellites company. In only four hours, I started the company, launched thousands of rockets, and now I have my own interplanetary wireless intranet from which I can control the entire galaxy! Thanks again!
I'm not a very experienced programmer; I've been doing it recreationally for about 2 years on and off, but I did a lot of web scraping and this is just a really neat piece of knowledge that I wouldn't have come across on my own. Thanks.
Nice video. It’s worth noting as well that many APIs will paginate, so rather than checking how many total results exist and manually iterating over them, you just check whether the ‘next page url’ (or equivalent) key exists in the results and, if so, get that page too until it no longer exists, merging/appending each time until the dataset is complete 👍
Yes you’re right thank you!
In fact you can see at 05:33 that this particular API does just that - there's "nextPageURL" and "totalPages" at the end of the response JSON.
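The pagination pattern described in this thread can be sketched in a few lines. The key names (`products`, `nextPageURL`) mirror the response seen in the video, but other sites will differ; the `fetch` callable is left abstract so any HTTP client can be plugged in.

```python
def collect_all_pages(fetch, first_url):
    """Follow 'nextPageURL' links, merging each page's items.

    `fetch` is any callable taking a URL and returning the decoded
    JSON dict (e.g. lambda u: requests.get(u).json()).
    """
    results = []
    url = first_url
    while url:
        page = fetch(url)
        results.extend(page.get("products", []))
        url = page.get("nextPageURL")  # missing/None on the last page
    return results

# Demo with an in-memory fake of a two-page API:
fake_api = {
    "/page/1": {"products": ["a", "b"], "nextPageURL": "/page/2"},
    "/page/2": {"products": ["c"]},
}
print(collect_all_pages(fake_api.get, "/page/1"))  # ['a', 'b', 'c']
```

Because the loop condition is the presence of the next-page key, no total-count lookup is needed.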
This is a true gold nugget. Thanks for demonstrating how to easily view the request in Insomnia and auto-generate code!
Please ignore my first comment. I checked out your first video in this series and learned about using scrapy shell to test each line of code. With that I found the bug in my code. The code worked PERFECTLY as advertised. You're the man! Much thanks!
I just want you to never stop creating such informative videos. For god's sake.
This is, no joke, the most useful video I ever saw on YouTube!
Loved everything about this video! Great delivery style, production quality and an interesting topic for me. First-time visitor to this channel and not a Python user (thanks, YouTube, for your weird but helpful predictive algorithms).
Thank you! I’m glad you enjoyed it
My man, this is EXACTLY what I was looking for. Had to do some extra steps, but a little trial and error and a basic understanding of HTTP was enough to solve my problem. Thank you!
The hidden API is by far the easiest way to scrape a website!!!! Thanks bro!!! Big clapping for you!!!!! I followed all your procedures & finally I did it.
really nice and helpful tips on a relevant topic, with pleasing recording quality. thank you for your time and effort.
Awesome advice, a lot of people skip checking the requests when building scrapers but it can save a lot of time when it works
I like how you regularly start sentences with 'you might think', assuming we are all idiots. I approve; glad smart people like you make time to explain to us plebs how the world works. Appreciated.
Hey, thanks. I do my best to explain things how I would have wanted to be taught
I always come to your channel for these excellent time-saving tips and tricks! Thank you!
Glad you like them!
Thanks for sharing! This has helped me a lot. After struggling for weeks with selenium, I was able to apply this technique fairly quickly, and am now using it as source to scrape ETF-composition data to feed directly into a PowerBI dataset. Much appreciated!
This video was the answer to my prayers!
The next best option was to watch a one-hour video and hope they would teach what you taught... in 10 minutes!!! 👏👏👏
Thank you glad it helped!!
This is such useful content that shows how much value experience gives - thank you for the straightforward and realistic tutorial!
I love you, John. You're awesome! Thanks for being unique and producing quality content.
I was struggling with Selenium to extract a table from a JavaScript website. This video saved so much time. Thank you.
HUGE! I've been looking for this info for 2 days. 12 mins of your video better than anything else, by far. Thumbs up and thank you so much
Thank you !!
@@JohnWatsonRooney you saved me a lot of time. I'm new to the topic; in the next few days I'll take a look at your channel
I was like "hm, okay, yeah" to "HOLY SHIT, THATS THE DOPEST SHIT I'VE EVER SEEN"
I'm starting to get into this niche and I intend to learn more Python and SQL (you know, Data Analysis stuff/jobs) and I'm doing a project to scrape NBA statistics but there are always some errors and it ends up taking a long time.
BUT THIS IS GOLD CONTENT, KEEP IT UP
There is always something new to learn.
I’ve been spending hours grinding out such information by hand-writing the whole program to get my result ;D
Thanks!
That was incredibly helpful and exactly what I needed today. Your presentation is very clear. Thank you!
Very, very interesting - I'm going to give this a go myself. Cheers for another great video John.
Wow, thanks for this excellent tutorial! I just spent all this time writing cumbersome Selenium code, when it turns out all the data I was looking for was already right there!
Great! That’s exactly what I was hoping to achieve with this video
bro, you're a game changer and i love you. if i ever see you in person ill offer to buy you a beer, or lunch, coffee whatever
OMG man, was searching for 3 hrs how to extract javascript data w/o complicated rendering and your vid gave a 3 second solution. thank you so much man
John Great Video...Thanks for taking the time to do this!!!
Great tutorial. My screen scraping job went from 4.5 hours to 8 minutes!!!!!
This video came into my feed just a couple days after I used exactly this method to collect some data from a website. Very good info! This is much easier than web scraping. Unfortunately, in my case, the data I could get out of the API was incomplete, and each item in the response contained a URL to a page with the rest of the info I needed, so I had to write some code to fetch each of those pages and scrape the info I needed. But much easier than having to scrape the initial list, as well.
Thanks! I’m glad it helped in some way. I often find that a combination of many different methods is needed to get the end result
Nice video! Used a similar method to collect European Court of Human Rights case documents since there is no official API. Glad to see such methods gaining popularity online, it’s so useful!
Thank you so much - this is so insightful and educational. Really helped me understand so many things in so little time.
nice tips, it's always fun to poke around and look at what data webpages are using
Thank you for those videos. They're extremely helpful. Keep up the good work! 🙂
Glad you like them!
Thank you for such wonderful videos. I learned a lot from you. BTW your video quality and background are always very beautiful.
thanks! it's nice of you to mention video quality and background, i do my best!
Thank you so much for this tutorial. It helped me a lot on my project. And I learned a lot of new things that I didn't know. Thank you!
Easy to understand and very neat & clean narration. Keep it up 🙂
Thanks a lot 😊
You have shown me the light. Thank you for stopping me from making more web scripts that load up web pages in browsers to click buttons.
I have tried this method, but sadly the site I am trying to scrape from returns "error": "invalid_client", "error_description": "The client credentials provided were invalid, the request is unauthorized." Am I out of luck?
I have used the inspect-with-Network-tab method but wasn't aware of the copy-as-cURL method; thanks for that tip, it will save me a lot of time!
It worked like charm! I really needed this. Thanks
Thank you! It provided a new way of thinking at the issue of collecting data. 🙏
Clear and helpful as usual. Thanks a lot!!
This video is really amazing I learned web scraping from your videos
thanks
Thank you, your videos have automated my job. All I need now is an AI cam of myself.
Wow! I just found a gem of a channel! Love your content!
Thanks appreciated!!
Wow that ‘generate code’ feature is super useful. Thanks!
Tnx :) Went from 1 hour of scraping with Selenium to 1 minute of just getting the JSONs.
This is amazing! So many things I didn’t know.
Thanks a lot, very interesting video, i learned so many things that i didn't know. I will come back for sure!
This is seriously high level content right here
Quite interesting! Thank you so much for showing such nice tricks, gonna get familiar with Insomnia.
brilliant, i started doing that and it's very effective. Good channel and good job. Thank you John
This has saved me hours today.
Genuinely, thank you. 🙇♂️
That’s great, thank you for watching!
Thank you for making this awesome tutorial.
Lol, I hadn't thought of the possibility of getting an API like that until now, haha. Thanks a lot!
Thanks for this - and other - videos, John. Super helpful! Regarding the cookie expiring, can you suggest a way to use Playwright to programmatically generate the cookie used on the API request? I'm assuming that cookie isn't the same as the cookie used for the request of the HTML, but maybe that's wrong?
John, you make scraping interesting and motivating simultaneously.
Good that I found your channel
P.S. I lost it at 0:10 😂
Thanks John! You are a lifesaver sir!
Great information Sir, really helpful.
Super well taught! Thank you Sir!
This is very useful trick John. 💖
This has been a great help for a beginner like me.. Thanks a lot..
This is amazing and will help me alot. Thank you!!
Nice tutorial on scraping; some tricks I have been using myself, and some others I'd never heard of until now. Thx for sharing!!!
Small adjustments, if I may (please don't take this as criticism): I think you don't need to loop over each product to copy it into your results; you can use extend instead. Also, I think the headers didn't change, so you can take them out of the loop over pages.
Huge thanks this video was a game changer for me!
Greetings from Brazil! Thank you! I just had to adjust some of the quote marks in the header (there were some 'chained' double quotes, like ""windows""), making some of the header's strings be interpreted by Python as code, not text. I just had to change the inner double quotes to single quotes (e.g. "'windows'") and it worked perfectly! Can't wait to try your other tutorials! Once more, thank you very much!
Hey! Thank you! I’m glad you managed to get it work
Very nice... I did the same earlier on another site; that was a bit tricky. This is a very straightforward site. Meanwhile, Insomnia reduced the work even more ;). Thanks for another great video
That’s great. Yes, I picked this one because it was very easy; I think it helps people understand the core concepts better
Hi John, I loved the video so much that I had to join Patreon to subscribe there to you. Thanks!
Hey! Thank you very much!
Nice video -- perfect level of detail.
Thanks
This is gold man, thank you!
Just WOW.
Thanks!
Hi John. Amazing content as always. Do you think I can skip learning Scrapy for now? Can I do most scraping tasks just by using BS and requests-html?
Sure you can. If it works for you then carry on!
Great content. Thanks for this video
thank you for the information you have explained; this is very helpful for the research I am doing
thank you so much
John, a specific video about how to scrape React websites would be nice. They use a mix of HTML and JSON data on pages... just an idea. Keep up the good work, loving it.
Thanks for the video. Not new to Python, pandas, or APIs, but I do need to start scraping pages for the API, as some are not published, or not documented well. Thanks.
Thank you, this is so inspiring.
Thanks so much. This makes things a lot easier
Great to hear!
Wow, I think this 1 tutorial will do the most to up level my scraping than I could have ever imagined. Bye bye selenium.
Amazing ♥️, super helpful
You made/make stepping into scraping and developing, easy and fun .
Thanks for sharing !!
Thank you!
Instead of looping over the list and appending each individual item, you can do list.extend(other_list), which extends the list with the new list. The result is one list of dictionaries (basically identical to how you did it) but with less and cleaner code.
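The append-vs-extend point above, sketched concretely (the variable names are made up):

```python
page1 = [{"id": 1}, {"id": 2}]
page2 = [{"id": 3}]

# Appending one item at a time, as in the video:
results = []
for page in (page1, page2):
    for p in page:
        results.append(p)

# Extending with each page's whole list -- same outcome, less code:
merged = []
for page in (page1, page2):
    merged.extend(page)

print(results == merged)  # True
```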
excellent video. Subscribed!
My master in data extraction.
Thanks a lot John!!!
Such a nice vid; if there was a VPN ad, I didn't even notice it!
Hello John! I have a quick question then: this will give you much more information than you might need if you don't change anything in Insomnia. Will deleting the headers remove some of this? Or is it more likely to change based on what you put in the "for p in data" section? i.e. if I include keys that are further indented, will I just receive those keys as opposed to everything nested under an unindented key?
I found this extremely useful though and have been using it since!
This was nearly exactly my job back in 2014/2015 for a giant e-com shoe company. Was always nice when you'd come across a brand that included their inventory count in their API.
But yes selenium/watir all day lol
I’m often quite surprised how much info you can easily get!
wonderful. thank you.
John thank you for the videos.
How do you deal with it when, in the network tab's XHR filter, you have a GraphQL object, not a JSON one?
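One way to approach the GraphQL question above: a GraphQL endpoint is still JSON over an HTTP POST, so the same hidden-API trick applies; copy the `query` string and `variables` from the request body shown in the Network tab. A minimal sketch, where the query, endpoint, and field names are all invented for illustration:

```python
import json

def build_graphql_payload(query, variables=None):
    """Assemble the JSON body a GraphQL endpoint expects."""
    return json.dumps({"query": query, "variables": variables or {}})

# Copy the query string and variables from the Network tab request body:
payload = build_graphql_payload(
    "query($n: Int) { products(first: $n) { name price } }",
    {"n": 10},
)
print(payload)
# Send it like any other hidden API call, e.g. with requests:
#   requests.post(endpoint_url, data=payload,
#                 headers={"Content-Type": "application/json"})
```

The response comes back as ordinary JSON under a top-level `data` key, so the rest of the scraping workflow is unchanged.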
amazing one, thanks a lot
Best! Thank you!
As always - great content, well presented. I tried out your code today and unfortunately Scrapy, although it outputs a data dump of the source page, is unable to download the JSON data. I tried a bunch of settings changes to no avail. Any quick suggestions on where to look for a fix? Thanks in advance for your time.
Nice tutorial. But one important thing you haven't mentioned is that many such APIs have some sort of authorization (based on headers, referrer, token, key, whitelist, etc.).
Yeah, for sure. I work for MS and all APIs are whitelisted. Wish I could access them as a public user lol
Some do. Most 'public' ones (e.g. no account needed) will not. Even then, figuring out what they do for auth is often trivial
That's true. Another thing is that some sites have preventative mechanisms against bots accessing them. Something as easy as IP rotation would probably do.
@@asyaryraihan They're not going to IP block someone for shopping on their site too much lmao
@@maskettaman1488 I’ve been IP blocked for web scraping before. Then again, I didn’t purchase anything. I was taking the photos lol
Awesome! Thank you! 🙂
Amazing content and tricks
Great job! Thank you very much!
Thanks!
You are the best sir!