Beginner's Guide To Web Scraping with Python - All You Need To Know
- Published: 31 May 2024
- The web is full of data. Lots and lots of data. Data prime for scraping. But manually going to a website and copying and pasting the data into a spreadsheet or database is tedious and time consuming. Enter web scraping! This guide will show you how to get started scraping web data to your heart's content in 8 minutes!
_____________________________
📲🔗🔗📲 IMPORTANT LINKS 📲🔗🔗📲
_____________________________
• 💻PROJECT PAGE💻 - github.com/gigafide/basic_pyt...
• Python 3 - www.python.org/downloads/
• BeautifulSoup - www.crummy.com/software/Beaut...
• Scraper Testing Website - quotes.toscrape.com/
• Thonny - thonny.org/
_____________________________
📢📢📢📢 Follow 📢📢📢📢
____________________________
redd.it/5o3tp8
/ tinkernut_ftw
/ tinkernut
/ tinkernut
00:00 Introduction
00:42 Setup
01:16 Background
02:23 Legality Concerns
02:51 Writing The Code
06:47 Conclusion
This editing is fantastic, the explanations are clear and concise and completely without obfuscation. You, sir, are a gentleman.
Big faxxx! so many nonsense intro to scraping vids, but not this one : ))
I’m sorry 😢 I’m not going
Bro this is crazy
I was trying to make a code to get stuff from my math homework website
When the world needed him most, he returned.
So far in my life, this has been the smoothest learning process I have ever experienced. Thank you kind sir!
Great introduction. Clear, concise and covered related topics without being distracting. I look forward to your other videos on Python.
So glad to see you posting again! I missed your videos so much. I believe my first video of yours was either How to Setup a Webserver or How to Make an Operating System. Both excellent videos!
This is exactly what I was looking for. Very concise and helpful, thank you!
currently planning for my computer science A level project and wanted to learn what this web scraping thingamejiggy was all about
this video was an amazing introduction! simple, clear, but not overly professional
didn't leave me feeling overwhelmed, and i'm going to watch more of your tuts now, cheers mate!
This is crazy to see your videos again being recommended :o
it has been years since I saw your last video!
I always end up back here when I need a refresher on scraping ❤ thank you!
This smart man is still alive
I was about to comment the same lmao.
Amazing video to get you started with scraping, thanks!
Beautiful tutorial, exactly what I've been looking for. Thanks a lot, Man!
Your technological code geniusness shall be added to my own. Seriously looking for this. Thanks!
Love the Borg reference XDD
Hey man, this is great!! Happy to see another video from ya!
One of my favorite channels for learning ... you rock
I love that you used a Raspberry Pi in this tutorial. It's amazing to mess around on and do little experiments.
haha awesome man. I don't even do coding but couldn't resist following along just to try it! Cheers!
These are the kinds of programming videos we need!
Man... I've seen other web scraping tutorials and they take you ten miles down the road and throw all types of advanced garbage at you. Granted, I know what you have shown here is the quick and easy way, but that's all I have wanted: to get an understanding of what it is and how it basically works. Thank you.
great video! seems very straight forward and easy to follow. I will be trying it out in the next day or two
Awesome video! Short and to the point. Thank you!
wooooow, it's been years since I last saw a Tinkernut video. i think about 10 years ago i learned sql and php with your tutorial about making a webpage with users, passwords, etc.
man so nice to see a video of you.
I grew up in the early youtube days. I was enamored by the computer knowledge that I could only get from channels like Tinkernut. There really were no schools that offered nuanced coding/web lessons when I was growing up. It wasn't until I went to college and got my degree in Computer Science that I was able to build a foundation in computational theory and all sorts of other fun subjects related to computers.
Thanks for helping me along the way to that journey, Tinker!
cool tutorial :D
for more complicated data I use xpath, although its syntax is a bit weird at first.
furthermore: validate, validate and validate your data. you do not want a program which crashes randomly, only because a value is missing, empty or malformed :)
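The validation advice above is worth making concrete. Here is a minimal sketch of rejecting missing, empty, or malformed records before writing them anywhere; the field names "quote" and "author" are illustrative assumptions, not from the video.

```python
# A minimal validation sketch for scraped records, as the comment suggests.
# The "quote"/"author" field names are illustrative assumptions.

def is_valid(record):
    """Reject records with missing, empty, or non-string fields."""
    for field in ("quote", "author"):
        value = record.get(field)
        if not isinstance(value, str) or not value.strip():
            return False
    return True

records = [
    {"quote": "Simplicity is the soul of efficiency.", "author": "Austin Freeman"},
    {"quote": "", "author": "Unknown"},   # empty quote -> rejected
    {"author": "No quote at all"},        # missing field -> rejected
]

clean = [r for r in records if is_valid(r)]
print(len(clean))  # 1
```

Filtering before writing means a missing value skips one record instead of crashing the whole run.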
Thanks for this tutorial, Looking forward to the next part.
Thanks! Super basic but it was what I needed to make my code start working!
I actually needed this!
Very practical and helpful video with very detailed explanation!
Wow, really great production. Lots of history and info
Thank you for this video I created another scraper for eth, it's rough but it's my first and I am so happy
dude, that intro proves you have a bright future in infomercials!
I swear to god you are the best!
I now see why YouTube doesn't recommend great videos. It's because YouTube doesn't want people to study tech!!
The legend is back!
Thank you for the video
it helped me understand how a scraper works
Overall, I highly recommend this video to anyone who is interested in learning Python. It is a comprehensive and informative resource that will teach you what you need to know to get started with this powerful programming language.
if you get an error, try replacing the line of code: file = open('scrapped_quotes.csv', 'w', encoding='utf-8', newline='')
Great video. With the phrase "web scraper", I can't help but picture a function that returns a digital box chevy with candy paint, 26" chrome rims, tinted windows, and triple 15" subs in the trunk with some Too $hort going. I hope someone else from Northern California is thinking the same thing, and cracks up seeing this.
But thank you for your fantastic educational video! cheers.
A savvy businessman could use web scraping to scrape a competitor's website for product pricing (including product numbers, photos, and prices) and then use this to monitor their price changes and/or adjust his own prices on his website to stay just a slight bit more competitive
Awesome stuff.....much appreciated!
great video. very easy to implement and understand
@tinkernut you are the reason for me being a software developer..
Thanks dude. Keep up the good work..
this tutorial was great!! thank you!
Our lord has returned.
Just the inexpensive project I needed.
very logical and understandable explanation
Most well earned subscriber ever
This channel used to have like 100k views. Now it's down to just less than 10k. Idk why. When I was around 13, I wanted to make an FPS game and found his video to be very interesting. I've followed this channel since then. Tinkernut was the reason I started learning programming, after watching his HTML tutorial (create a website from scratch). Even though I neither have a comp-sci degree nor work as a programmer, I'm still learning Python during my free time. Thank you Daniel.
it's a coincidence that I have a task to scrape data and format it to CSV then send it to email. thank you for this tutorial, sir.
This channel is awesome!!
Concise and precise
well explained, ty
Long time no see.
This may be useful for tracking stock for a PS5/Xbox/Switch/GPU in these times.
Even a Switch is being scalped?
I heard about PS5, Xbox Series X|S, GPUs but not about the Switch itself.
Hey, I'm getting "NameError: name 'page_to_scrape' is not defined"
I need more content on Raspberry Pi Pico !!
Thanks a lot for this clear video! How would I retrieve more information associated with the quote? For instance I would like to receive and print both the author and the associated tags.
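One way to answer the question above: on the tutorial's test site each quote sits in a container with the quote text, the author, and tag links, so you can grab all three from each container. A sketch, using an inline HTML snippet that mirrors that structure (the class names `quote`, `text`, `author`, and `tag` match quotes.toscrape.com, but verify against the live page):

```python
from bs4 import BeautifulSoup

# Inline HTML mirroring the structure used on quotes.toscrape.com:
# div.quote containing span.text, small.author, and a.tag links.
html = """
<div class="quote">
  <span class="text">Talk is cheap. Show me the code.</span>
  <small class="author">Linus Torvalds</small>
  <div class="tags">
    <a class="tag">code</a>
    <a class="tag">programming</a>
  </div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
for quote in soup.find_all("div", class_="quote"):
    text = quote.find("span", class_="text").get_text()
    author = quote.find("small", class_="author").get_text()
    tags = [a.get_text() for a in quote.find_all("a", class_="tag")]
    print(text, "|", author, "|", ", ".join(tags))
```

The key idea is to iterate over the outer containers instead of zipping separate lists, so each quote stays paired with its own author and tags.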
I love this man
Thanks for the vid! After a VERY VERY long time i'm getting back into casual coding and looking to casually make some scraping info programs for games with the option to select which info the person wants to see.
So if the site allows scraping, would it be better to have my app in progress be independent and have checks done once a minute or every five minutes? Or have the info scraped, processed, and posted on a site I create and retrieved for ppl using the app? That is, if I start sharing the app. My concern is annoying the site owners by checking too often. Forgive me if it's a silly question, I'm not experienced with scraping.
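On the polling-frequency worry above: the polite pattern is a loop that sleeps between checks. A hedged sketch; the interval value and the check function are placeholder assumptions, not anything from the video.

```python
import time

# Polite polling sketch: check at a fixed interval instead of hammering
# the site. The check() callable and interval are placeholders.

def poll(check, interval, max_checks):
    """Call check() up to max_checks times, sleeping between attempts."""
    results = []
    for i in range(max_checks):
        results.append(check())
        if i < max_checks - 1:
            time.sleep(interval)  # be kind to the server between requests
    return results

# Demo with a dummy check and a tiny interval so it finishes instantly;
# a real app might use interval=300 (five minutes) or longer.
results_demo = poll(lambda: "checked", interval=0.01, max_checks=3)
print(results_demo)
```

As a rule of thumb, the longer the interval the less likely you are to bother the site owner; a central server that scrapes once and serves many users is also gentler on the source site than every user's app scraping independently.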
Thanks for sharing the expertise! However, I get the following error when running the code.
writer.writerow([quote.text, author.text])
UnicodeEncodeError: 'latin-1' codec can't encode character '\u201c' in position 0: ordinal not in range(256)
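The `'\u201c'` in that traceback is a curly opening quote, which the default latin-1 codec can't encode. Opening the output file with an explicit UTF-8 encoding avoids the crash. A sketch, using a temp file to stand in for the real `scraped_quotes.csv`:

```python
import csv
import os
import tempfile

# Writing a curly quote crashes under latin-1; encoding="utf-8" fixes it.
# A temp file stands in for scraped_quotes.csv here.
path = os.path.join(tempfile.gettempdir(), "scraped_quotes_demo.csv")
with open(path, "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["\u201cHello world\u201d", "Anonymous"])

with open(path, "r", encoding="utf-8", newline="") as f:
    content = f.read()
os.remove(path)
print(content.strip())  # “Hello world”,Anonymous
```

The `newline=""` argument is what the csv module's docs recommend for files passed to `csv.writer`, and it also prevents the blank-line-between-rows symptom some commenters mention.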
What a great video
Web scraping is to copying and pasting manually, as copying and pasting manually is to using your eyeballs, memorising, then typing it into a file. There is no difference between surfing the web and web scraping. One is just faster. Like how copy/pasting something from Wikipedia is faster than reading and re-writing it.
Yes, automation is a huge time saver 👍🏾
Nice! I need to learn Python
Cool!
Nice stuff, X.
Yeah, I thought it was very nice too. For me I use visual studio and I found it to be very helpful since I was able to use python and install the pips for python via command prompt then use visual studio code. Though what my primary application would be for finding different sites from a website. Would be interesting for finding src's and href's. Nice name btw. I like the commonality of it.
Best YouTuber.
An extraordinary piece of video material that has proven highly useful for our new team members. Your generosity is immensely appreciated!
Funny how it's titled Beginners Guide to Scraping and once he's done with the introduction and starts typing a bunch of codes that " beginners" have absolutely no clue how to do... Thanks, man great help!
Halloween intro? At the end of November? This video's been a while in the making huh?😂
Need more advanced lessons on scraping.
Hey Tinkernut. Welcome back to my feed.
The avatar has returned 🙌
Great explanation. Simple and to the point. Had to look up, though, what the zip function did, but I guess it's even better that I had to find it out on my own.
However, the quotation marks are not saved right in csv file, instead, they show as 3 weird characters. They do display correctly in Thonny, though.
Also, the authors are not put into a separate column, but in the same one with the quote.
Also, the quote with a semicolon in it got broken at this semicolon in two parts, and the second part was placed into a separate column.
Also, in the csv file open I had to put encoding = "utf-8" after the "w", because I was getting an encoding error. Could this somehow be causing the above problems?
same problems here (except the third). I am happy that it isn't just me, but I don't know how to fix them bc I am new to this.
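The symptoms in this thread (three weird characters instead of quotation marks, rows split at the wrong places, blank lines) are classic spreadsheet-plus-encoding issues rather than scraper bugs. A sketch of one way to address them, assuming the file is being opened in Excel: `utf-8-sig` writes a byte-order mark so Excel decodes the curly quotes correctly, `newline=""` removes the blank line after every row, and `csv.writer` quotes any field containing the delimiter so punctuation inside a quote stays in one cell.

```python
import csv
import os
import tempfile

# Rows containing the characters that caused trouble in the thread:
# curly quotes, a comma, and a semicolon inside one field.
rows = [
    ["\u201cLife is short; code is long.\u201d", "Anonymous"],
    ["\u201cSimplicity, above all.\u201d", "Anonymous"],
]

path = os.path.join(tempfile.gettempdir(), "quotes_demo.csv")
with open(path, "w", encoding="utf-8-sig", newline="") as f:
    csv.writer(f).writerows(rows)

# Reading back with the csv module recovers the rows exactly.
with open(path, "r", encoding="utf-8-sig", newline="") as f:
    parsed = list(csv.reader(f))
os.remove(path)
print(parsed == rows)  # True
```

If Excel still splits on semicolons, that is a regional-settings quirk (some locales treat ";" as the list separator); importing the file via Excel's text-import dialog and choosing comma as the delimiter usually sorts it out.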
Where can we find out if we are allowed to scrape data from a specific website so that eventually we don't end up in trouble?
Does the scraping code/process work the same way for scraping product prices, e.g. trying to replicate camel for amazon, or does that take additional authorization from amazon?
Excellent question! All popular websites have a scraping/crawling text file called "robots.txt". This tells what can and can't be scraped from a website. Here is an example of Amazon's robots.txt file (spoiler, you can't scrape much) www.amazon.com/robots.txt
@@Tinkernut what about those non-popular websites with no robots.txt file?
@@jimavictor6022 As long as you don't scrape things like other people's documents from governmental sites or usernames plus passwords, you should be fine with the rest.
What website owners are really worried about is their website's availability (whether they are online or offline) and bandwidth usage, as they pay for each gigabyte they send to and receive from users.
So as long as you don't consciously/unconsciously take down their site you're fine.
@@jimavictor6022 On top of that, they have their automated ways to detect bots. The worst that can happen is getting your IP "banned" or simply restricted from viewing their webpages, and that will happen way, way, way... before you get sued by them.
@@JoaoPedro-ki7ct I really appreciate the reply. Thank you..
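For checking robots.txt programmatically rather than by eye, Python's standard library has `urllib.robotparser`. A sketch; the rules here are parsed from an inline string for illustration, while against a live site you would call `set_url(".../robots.txt")` and `read()` instead.

```python
from urllib import robotparser

# Inline robots.txt rules standing in for a fetched file.
rules = """
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# can_fetch(user_agent, url) answers "may this agent scrape this URL?"
allowed = rp.can_fetch("*", "https://example.com/quotes")
blocked = rp.can_fetch("*", "https://example.com/private/secret")
print(allowed, blocked)  # True False
```

This only tells you what the site *asks* of crawlers; it is etiquette, not law, but respecting it is the easiest way to stay out of trouble.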
when I write to the csv file, for some reason there is always one empty row (with literally nothing) between the actual rows with data
you owe me bro. i just subscribed to your channel😂😂
dude where were u?
I had to add encoding to the line--- file = open("scraped_quotes.csv", "w", encoding='utf-8')
What if the data you are searching for is obtainable but is on separate pages within a given site.
Dankeschön ❤
how much more difficult is it if I want all sub-pages where you would normally find more information?
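Not much more difficult: you loop, and on each page look for a "next" link before moving on. A sketch of that pattern; here a dict of HTML strings simulates a two-page site (the `li.next` selector matches how quotes.toscrape.com marks its next-page link, but check your target site's markup), and with a live site you would fetch each URL with requests instead of reading from the dict.

```python
from bs4 import BeautifulSoup

# Two fake "pages"; the first links to the second, the second is last.
PAGES = {
    "/page/1/": '<span class="text">Quote one</span>'
                '<li class="next"><a href="/page/2/">Next</a></li>',
    "/page/2/": '<span class="text">Quote two</span>',
}

quotes = []
url = "/page/1/"
while url:
    soup = BeautifulSoup(PAGES[url], "html.parser")  # real code: fetch url
    quotes += [s.get_text() for s in soup.find_all("span", class_="text")]
    next_link = soup.select_one("li.next a")          # stop when absent
    url = next_link["href"] if next_link else None

print(quotes)  # ['Quote one', 'Quote two']
```

For detail pages (sub-pages per item) the idea is the same: collect the hrefs from the listing page, then fetch and parse each one, ideally with a short pause between requests.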
What are the pips we need to install?
Last time i did something like that i used a line mode browser to flatten the webpage.
what Raspberry Pi do you use?
Awesome 🔥 bro. Can you make a tutorial about tunnelling and vpns
Sure can! I made them both a few years ago ;-) Just search my channel
I had no clue it was this easy, but how do I find out which websites I'm not allowed to scrape? All I get from Google is ways to prevent scraping on my own website (which I don't have, but that's beside the point).
Davy504 fan? "Scrape it..." Just kinda reminded me of the ol' "SLAP IT!" line. lol
Thanks, this was very good. Can you share any link where you have done the same for a website which requires a username and password? Thanks a ton
Cool goggles, where can I get a pair?
how can a website ban scraping, since once the data is downloaded it is open for the taking?
unless the scraping script acts as a browser and they can figure it out based on user agents or the lack thereof.
in which case, could you intercept the data from the html source in the browser, as if you saved the page as an html file, ran it through the script, then refreshed the page and repeated?
technically speaking, there is basically no way to stop it, besides maybe reCAPTCHA, but even then you can simply have a human do the captcha
Law
@@pakistaniraveasylum1396 it's never even been tried in a court tbh
@@pakistaniraveasylum1396 thats like trying to make inspect element illegal it just doesn’t work
@@linuxramblingproductions8554 yea, the law and bureaucracy in general lag way behind tech
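Since the thread above mentions user-agent checks: this is simply an HTTP header your client sends, and scraping libraries let you set it. A sketch with the standard library; the user-agent string itself is a made-up example, and nothing is actually fetched until the request object is passed to `urlopen()`.

```python
from urllib.request import Request

# Attach an honest, identifiable User-Agent header to a request object.
# Identifying your bot (rather than spoofing a browser) is the polite
# approach the thread hints at; the string below is illustrative.
req = Request(
    "https://quotes.toscrape.com/",
    headers={"User-Agent": "my-scraper/1.0 (contact: me@example.com)"},
)
ua = req.get_header("User-agent")
print(ua)  # my-scraper/1.0 (contact: me@example.com)
```

A default urllib user-agent is one of the easiest bot signals for a site to spot, which is why header checks are the first line of detection mentioned above.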
Can you do one for people who never used code?
Error: "No module named bs4"
Facing the same, were you able to fix it?
what if I want just the first quote?not all
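For just the first match, use `find()` instead of `find_all()`: it returns the first matching element rather than a list. A sketch with inline HTML mimicking the tutorial site's `span.text` elements:

```python
from bs4 import BeautifulSoup

# find() -> first match only; find_all() -> every match as a list.
html = """
<span class="text">First quote</span>
<span class="text">Second quote</span>
"""

soup = BeautifulSoup(html, "html.parser")
first = soup.find("span", class_="text")
print(first.get_text())  # First quote
```

Equivalently, `soup.find_all("span", class_="text")[0]` gives the same element, but `find()` is the idiomatic way when you only want one.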
I use IDLE, but for some reason in the 'soup.findAll' function it says 'NameError: name 'soup' is not defined' :(
Fixed 🤦♂
What is line 10 "w"? I am getting NameError: name 'scraped_quotes' is not defined
You probably have a typo
Running it with my code from github works fine github.com/gigafide/basic_python_scraping/blob/main/basic_scrape_csv_export.py
Can websites detect scraping? If so, how do i escape the dutch AIVD
Yes, they have their ways to detect automated requests, but what they do when they detect "bots" is up to each website.
yes and no. you can check for things like the user agent string or try to run javascript or something like that, however it's actually a really hard problem to solve because a scraping script can look indistinguishable from a browser ..
Love your videos. I don’t understand much of the content, but what’s the difference between taking these quotes via code and just copy-pasting into an Excel sheet? I’m a noob, sorry
You can do it automatically every X amount of time.
You can use a "bot" to do something with that data you scraped.
I don't use Excel, but if you're talking about what I am thinking, Excel is doing exactly what this video talked about: web scraping.
The thing is that Excel does it for you without the need for you to program it first, but the web scraping it does is very, very limited compared to what tools made for scraping can do.
In practice? Nothing is different; you get the same result. However, let's say you have a website with 2000 quotes and you need to keep a sheet up to date. That's where a scraper would be useful, as it's time you really only need to spend once. Plus, at that kind of scale it would be faster to write the code than do it manually.
@@JoaoPedro-ki7ct thank you!
I'm only leaving a good comment bc my gf told me to.
Good video👍
What do i do if the page gives 404 ???
its not working with opentable
The really dry jokes are surprisingly pleasant.. who could scrape the web without a web? What do you think all the spiders think about that?
is it really this simple?