Beginner's Guide To Web Scraping with Python - All You Need To Know

  • Published: 31 May 2024
  • The web is full of data. Lots and lots of data. Data prime for scraping. But manually going to a website and copying and pasting the data into a spreadsheet or database is tedious and time-consuming. Enter web scraping! This guide will show you how to get started scraping web data to your heart's content in 8 minutes!
    _____________________________
    📲🔗🔗📲 IMPORTANT LINKS 📲🔗🔗📲
    _____________________________
    • 💻PROJECT PAGE💻 - github.com/gigafide/basic_pyt...
    • Python 3 - www.python.org/downloads/
    • BeautifulSoup - www.crummy.com/software/Beaut...
    • Scraper Testing Website - quotes.toscrape.com/
    • Thonny - thonny.org/
    _____________________________
    📢📢📢📢 Follow 📢📢📢📢
    ____________________________
    redd.it/5o3tp8
    / tinkernut_ftw
    / tinkernut
    / tinkernut
    00:00 Introduction
    00:42 Setup
    01:16 Background
    02:23 Legality Concerns
    02:51 Writing The Code
    06:47 Conclusion

Comments • 167

  • @michaelmagill5466
    @michaelmagill5466 2 years ago +93

    This editing is fantastic, the explanations are clear and concise and completely without obfuscation. You, sir, are a gentleman.

    • @chanson8508
      @chanson8508 3 months ago

      Big faxxx! so many nonsense intro to scraping vids, but not this one : ))

    • @Greshma123
      @Greshma123 1 month ago

      I’m sorry 😢 I’m not going

    • @SonicFusedWith_Goku
      @SonicFusedWith_Goku 21 days ago

      Bro this is crazy

    • @SonicFusedWith_Goku
      @SonicFusedWith_Goku 21 days ago

      I was trying to make a code to get stuff from my math homework website

  • @Sivarajansam931
    @Sivarajansam931 2 years ago +57

    When the world needed him the most, he returned.

  • @JccChanco
    @JccChanco 20 hours ago

    So far in my life, this has been the smoothest learning process I have ever experienced. Thank you kind sir!

  • @JoaquinRoibal
    @JoaquinRoibal 1 year ago +23

    Great introduction. Clear, concise and covered related topics without being distracting. I look forward to your other videos on Python.

  • @benjaminblack8653
    @benjaminblack8653 2 years ago +7

    So glad to see you posting again! I missed your videos so much. I believe my first video of yours was either How to Setup a Webserver or How to Make an Operating System. Both excellent videos!

  • @sauceboss38
    @sauceboss38 1 year ago +11

    This is exactly what I was looking for. Very concise and helpful, thank you!

  • @lemonbread378
    @lemonbread378 1 year ago +5

    currently planning for my computer science A level project and wanted to learn what this web scraping thingamejiggy was all about
    this video was an amazing introduction! simple, clear, but not over-professional
    didn't leave me feeling overwhelmed, and I'm going to watch more of your tuts now, cheers mate!

  • @algj
    @algj 2 years ago +2

    This is crazy to see your videos again being recommended :o
    it has been years since I saw your last video!

  • @Squid666
    @Squid666 2 months ago

    I always end up back here when I need a refresher on scraping ❤ thank you!

  • @kedrovasuma2857
    @kedrovasuma2857 2 years ago +15

    This smart man is still alive

    • @ten132
      @ten132 2 years ago

      I was about to comment the same lmao.

  • @colinbrown6629
    @colinbrown6629 11 days ago

    Amazing video to get you started with scraping, thanks!

  • @wrzq
    @wrzq 4 months ago

    Beautiful tutorial, exactly what I've been looking for. Thanks a lot, Man!

  • @webslinger2011
    @webslinger2011 2 years ago +24

    Your technological code geniusness shall be added to my own. Seriously looking for this. Thanks!

  • @Geeksmithing
    @Geeksmithing 2 years ago +1

    Hey man, this is great!! Happy to see another video from ya!

  • @dugumayeshitla3909
    @dugumayeshitla3909 10 months ago

    One of my favorite channels for learning ... you rock

  • @proxyscrape
    @proxyscrape 1 year ago +1

    I love that you used a Raspberry Pi in this tutorial. It's amazing to mess around on and do little experiments.

  • @TheJoyOfGaming
    @TheJoyOfGaming 2 years ago +5

    haha awesome man. I don't even do coding but couldn't resist following along just to try it! Cheers!

  • @YeshuaIsTheTruth
    @YeshuaIsTheTruth 1 year ago

    These are the kinds of programming videos we need!

  • @mrklean0292
    @mrklean0292 1 month ago

    Man... I've seen other web scraping tutorials and they take you ten miles down the road and throw all types of advanced garbage at you. Granted, I know what you have shown here is the quick and easy way, but that's all I have wanted to get an understanding of: what it is and how it basically works. Thank you.

  • @htstube1
    @htstube1 1 year ago +1

    great video! seems very straightforward and easy to follow. I will be trying it out in the next day or two

  • @goodbook6865
    @goodbook6865 1 year ago

    Awesome video! Short and to the point. Thank you!

  • @santiagoSosaH
    @santiagoSosaH 2 years ago

    wooooow, it's been years since I last saw a Tinkernut video. I think about 10 years ago I learned SQL and PHP with your tutorial about making a webpage with users, passwords, etc.
    man, so nice to see a video of yours.

  • @HayCorvus
    @HayCorvus 2 months ago

    I grew up in the early YouTube days. I was enamored by the computer knowledge that I could only get from channels like Tinkernut. There really were no schools that offered nuanced coding/web lessons when I was growing up. It wasn't until I went to college and got my degree in Computer Science that I'd be able to build a foundation in computational theory and all sorts of other fun subjects related to computers.
    Thanks for helping me along the way on that journey, Tinker!

  • @Syndesi
    @Syndesi 2 years ago +13

    cool tutorial :D
    for more complicated data I use XPath, although its syntax is a bit weird at first.
    furthermore: validate, validate, and validate your data. you do not want a program which crashes randomly just because a value is missing, empty, or malformed :)

  • @lundebc
    @lundebc 2 years ago +1

    Thanks for this tutorial, looking forward to the next part.

  • @gamerguy9533
    @gamerguy9533 2 months ago

    Thanks! Super basic but it was what I needed to make my code start working!

  • @teomanefe
    @teomanefe 2 years ago +5

    I actually needed this!

  • @Code___Play
    @Code___Play 3 months ago

    Very practical and helpful video with very detailed explanation!

  • @thecryptocheckpoint5083
    @thecryptocheckpoint5083 2 years ago

    Wow, really great production. Lots of history and info.

  • @pulp6667
    @pulp6667 2 years ago

    Thank you for this video. I created another scraper for ETH; it's rough, but it's my first and I am so happy.

  • @donsurlylyte
    @donsurlylyte 2 years ago +1

    dude, that intro proves you have a bright future in infomercials!

  • @bng3832
    @bng3832 2 years ago +1

    I swear to god you are the best!
    I now see why YouTube doesn't recommend great videos. It's because YouTube doesn't want people to study tech!!

  • @craftedpixel
    @craftedpixel 2 years ago +2

    The legend is back!

  • @deepvoyager01
    @deepvoyager01 4 months ago

    Thank you for the video
    it helped me understand how a scraper works

  • @NasimKhan-tk3ij
    @NasimKhan-tk3ij 10 months ago

    Overall, I highly recommend this video to anyone who is interested in learning Python. It is a comprehensive and informative resource that will teach you what you need to know to get started with this powerful programming language.

  • @desecrated.eviscerated
    @desecrated.eviscerated 7 months ago +3

    if you get an encoding error, try replacing the file-open line of code with: file = open('scrapped_quotes.csv', 'w', encoding='utf-8', newline='')
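
    For anyone hitting the same problem, here is a minimal sketch of that fix in context. It is not the video's exact code: the selectors assume the quotes.toscrape.com markup linked in the description, and the variable names are only illustrative.

        # Minimal sketch of a quotes.toscrape.com scraper with the corrected open() call:
        # encoding='utf-8' avoids UnicodeEncodeError on the curly quotation marks, and
        # newline='' prevents blank rows between records on Windows.
        import csv
        import requests
        from bs4 import BeautifulSoup

        page_to_scrape = requests.get("http://quotes.toscrape.com")
        soup = BeautifulSoup(page_to_scrape.text, "html.parser")

        quotes = soup.find_all("span", attrs={"class": "text"})
        authors = soup.find_all("small", attrs={"class": "author"})

        with open("scraped_quotes.csv", "w", encoding="utf-8", newline="") as file:
            writer = csv.writer(file)
            writer.writerow(["QUOTES", "AUTHORS"])
            for quote, author in zip(quotes, authors):
                writer.writerow([quote.text, author.text])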

  • @liamhughes7093
    @liamhughes7093 11 months ago

    Great video. With the phrase "web scraper", I can't help but picture a function that returns a digital box chevy with candy paint, 26" chrome rims, tinted windows, and triple 15" subs in the trunk with some Too $hort going. I hope someone else from Northern California is thinking the same thing, and cracks up seeing this.
    But thank you for your fantastic educational video! cheers.

  • @InspiredInsights4U
    @InspiredInsights4U 2 years ago +4

    A business could use web scraping to scrape a competitor's website for product pricing, including product numbers, photos, and prices, and then use this to monitor their price changes and/or adjust their own prices on their website to stay just a slight bit more competitive.

  • @mudasir2168
    @mudasir2168 1 year ago

    Awesome stuff.....much appreciated!

  • @jackschwabe4929
    @jackschwabe4929 8 months ago

    great video. very easy to implement and understand

  • @arjunaudupi7956
    @arjunaudupi7956 2 years ago +4

    @tinkernut you are the reason for me being a software developer..
    Thanks dude. Keep up the good work..

  • @Corkyjett
    @Corkyjett 2 years ago

    this tutorial was great!! thank you!

  • @myriadtechrepair1191
    @myriadtechrepair1191 2 years ago +6

    Our lord has returned.

  • @KowboyUSA
    @KowboyUSA 2 years ago +2

    Just the inexpensive project I needed.

  • @harrystone7954
    @harrystone7954 2 years ago

    very logical and understandable explanation

  • @RodWorldTours-fo6mh
    @RodWorldTours-fo6mh 5 months ago

    Most well earned subscriber ever

  • @Raxer_th
    @Raxer_th 2 years ago +8

    This channel used to get like 100k views. Now it's down to less than 10k. Idk why. When I was around 13, I wanted to make an FPS game and found his video to be very interesting. I've followed this channel since then. Tinkernut was the reason I started learning programming, after watching his HTML tutorial (create a website from scratch). Even though I neither have a comp-sci degree nor work as a programmer, I'm still learning Python during my free time. Thank you Daniel.

    • @toniphillips9269
      @toniphillips9269 2 years ago

      Yeah poops yeah lol iaooapaoopp lol oowss d’s aIA

  • @kenjohnsiosan9707
    @kenjohnsiosan9707 1 year ago

    it's a coincidence that I have a task to scrape data and format it to CSV then send it to email. thank you for this tutorial, sir.

  • @Warkeds
    @Warkeds 2 years ago

    This channel is awesome!!

  • @CareerHubSpot
    @CareerHubSpot 1 year ago

    Concise and precise

  • @KontrolStyle
    @KontrolStyle 1 year ago +1

    well explained, ty

  • @lucasn0tch
    @lucasn0tch 2 years ago +3

    Long time no see.
    This may be useful for tracking stock for a PS5/Xbox/Switch/GPU in these times.

    • @JoaoPedro-ki7ct
      @JoaoPedro-ki7ct 2 years ago

      Even a Switch is being scalped?
      I heard about PS5, Xbox Series X|S, GPUs but not about the Switch itself.

  • @fearlessAx
    @fearlessAx 1 year ago +3

    Hey, I'm getting "NameError: name 'page_to_scrape' is not defined"
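
    That error usually means the variable was never assigned before it was used (a typo, or the requests.get line missing or placed too late). A minimal sketch of the expected order, assuming the requests/BeautifulSoup setup shown in the video:

        # 'page_to_scrape' must be assigned before it is used; a NameError usually means
        # this line is missing, misspelled, or placed after the BeautifulSoup call.
        import requests
        from bs4 import BeautifulSoup

        page_to_scrape = requests.get("http://quotes.toscrape.com")   # define it first
        soup = BeautifulSoup(page_to_scrape.text, "html.parser")      # then use it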

  • @sagarnewpane8549
    @sagarnewpane8549 2 years ago +4

    I need more content on the Raspberry Pi Pico !!

  • @user-vz7ff8ps8k
    @user-vz7ff8ps8k 6 months ago

    Thanks a lot for this clear video! How would I retrieve more information associated with the quote? For instance I would like to receive and print both the author and the associated tags.
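
    One hedged way to do this, assuming quotes.toscrape.com still wraps each quote in a div with class "quote" and lists the tags as <a class="tag"> links inside it, is to loop over each quote block so the text, author, and tags stay associated:

        # Iterate over each quote block so text, author, and tags stay together
        # (selectors assume the current quotes.toscrape.com markup).
        import requests
        from bs4 import BeautifulSoup

        page = requests.get("http://quotes.toscrape.com")
        soup = BeautifulSoup(page.text, "html.parser")

        for block in soup.find_all("div", attrs={"class": "quote"}):
            text = block.find("span", attrs={"class": "text"}).text
            author = block.find("small", attrs={"class": "author"}).text
            tags = [tag.text for tag in block.find_all("a", attrs={"class": "tag"})]
            print(text, author, ", ".join(tags))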

  • @NitishKumarIndia
    @NitishKumarIndia 9 months ago

    I love this man

  • @JayD-jn9or
    @JayD-jn9or 1 month ago

    Thanks for the vid! After a VERY VERY long time I'm getting back into casual coding and looking to casually make some scraping info programs for games, with the option to select which info the person wants to see.
    So if the site allows scraping, would it be better to have my app-in-progress be independent and have checks done once a minute or every five minutes? Or have the info scraped, processed, and posted on a site I create, and retrieved from there by people using the app? That is, if I start sharing the app. My concern is annoying the site owners by checking too often. Forgive me if it's a silly question; I'm not experienced with scraping.
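
    Not a silly question. A common courtesy is to keep a generous, fixed delay between requests rather than hammering the site; the URL and interval below are placeholders, so treat this as a sketch of the idea only:

        # A polite polling loop: one request every few minutes rather than constant checks.
        # The URL and interval are placeholders; tune them to what the site tolerates.
        import time
        import requests

        POLL_INTERVAL_SECONDS = 300   # e.g. every five minutes

        while True:
            response = requests.get("http://quotes.toscrape.com")
            print(response.status_code, len(response.text))   # real processing would go here
            time.sleep(POLL_INTERVAL_SECONDS)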

  • @mmuneebahmed
    @mmuneebahmed 2 years ago +2

    Thanks for sharing the expertise! However, I get the following error when running the code.
    writer.writerow([quote.text, author.text])
    UnicodeEncodeError: 'latin-1' codec can't encode character '\u201c' in position 0: ordinal not in range(256)

  • @slankk
    @slankk 2 years ago

    What a great video

  • @AirmanKolberg
    @AirmanKolberg 2 years ago +8

    Web scraping is to copying and pasting manually, as copying and pasting manually is to using your eyeballs, memorising, then typing it into a file. There is no difference between surfing the web and web scraping. One is just faster. Like how copy/pasting something from Wikipedia is faster than reading and re-writing it.

    • @jalanmcrae
      @jalanmcrae 10 months ago

      Yes, automation is a huge time saver 👍🏾

  • @DrDre001
    @DrDre001 2 years ago

    Nice! I need to learn Python

  • @ahoj113
    @ahoj113 2 years ago +1

    Cool!

  • @nikitadorosh244
    @nikitadorosh244 2 months ago

    Nice stuff, X.

    • @RobloxPrompt
      @RobloxPrompt 2 months ago

      Yeah, I thought it was very nice too. For me, I use Visual Studio, and I found it to be very helpful since I was able to use Python and install the pips for Python via the command prompt, then use Visual Studio Code. My primary application would be finding different sites from a website; it would be interesting for finding src's and href's. Nice name btw. I like the commonality of it.

  • @codingmaster24
    @codingmaster24 2 years ago +1

    Best YouTuber.

  • @Autoscraping
    @Autoscraping 4 months ago

    An extraordinary piece of video material that has proven highly useful for our new team members. Your generosity is immensely appreciated!

  • @SarahGamigbigboss
    @SarahGamigbigboss 10 months ago +2

    Funny how it's titled Beginner's Guide to Scraping, and once he's done with the introduction he starts typing a bunch of code that "beginners" have absolutely no clue how to write... Thanks, man, great help!

  • @OtherDalfite
    @OtherDalfite 2 years ago +2

    Halloween intro? At the end of November? This video's been a while in the making, huh? 😂

  • @RigzoTV
    @RigzoTV 2 years ago +2

    Need more advanced lessons on scraping.

  • @Mcmiddies
    @Mcmiddies 2 years ago

    Hey Tinkernut. Welcome back to my feed.

  • @InvinsableNoob
    @InvinsableNoob 2 years ago

    The avatar has returned 🙌

  • @serhiyranush4420
    @serhiyranush4420 2 years ago +1

    Great explanation. Simple and to the point. Had to look up, though, what the zip function did, but I guess it's even better that I had to find it out on my own.
    However, the quotation marks are not saved right in the csv file; instead, they show as 3 weird characters. They do display correctly in Thonny, though.
    Also, the authors are not put into a separate column, but in the same one with the quote.
    Also, the quote with a semicolon in it got broken at this semicolon into two parts, and the second part was placed into a separate column.
    Also, in the csv file open I had to put encoding = "utf-8" after the "w", because I was getting an encoding error. Could this somehow be causing the above problems?

    • @kaiperdaens7670
      @kaiperdaens7670 5 months ago +1

      Same problems here (except the third). I'm happy that it isn't just me, but I don't know how to fix them because I am new to this.
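
    These symptoms are usually on the spreadsheet side rather than in the scraper itself. A hedged sketch of a file-open line that tends to help: "utf-8-sig" writes a byte-order mark so Excel recognises the curly quotation marks, and newline='' avoids blank rows. The column splitting/merging is typically a delimiter-locale issue; a spreadsheet set to semicolon-delimited CSV will misread comma-separated output unless you import the file explicitly or write it with delimiter=';'.

        # Sketch of a CSV-writing setup that displays cleanly in Excel:
        # 'utf-8-sig' adds a BOM so the curly quotes are decoded correctly, and
        # newline='' prevents the blank rows between records on Windows.
        import csv

        with open("scraped_quotes.csv", "w", encoding="utf-8-sig", newline="") as file:
            writer = csv.writer(file)                 # or csv.writer(file, delimiter=";")
            writer.writerow(["QUOTES", "AUTHORS"])
            writer.writerow(["“Sample quote”", "Sample Author"])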

  • @DTMPro
    @DTMPro 2 years ago +13

    Where can we find out if we are allowed to scrape data from a specific website, so that we don't eventually end up in trouble?
    Does the scraping code/process work the same way for scraping product prices, e.g. trying to replicate camel for Amazon, or does that take additional authorization from Amazon?

    • @Tinkernut
      @Tinkernut  2 years ago +13

      Excellent question! All popular websites have a scraping/crawling text file called "robots.txt". This tells what can and can't be scraped from a website. Here is an example of Amazon's robots.txt file (spoiler, you can't scrape much) www.amazon.com/robots.txt

    • @jimavictor6022
      @jimavictor6022 2 years ago +1

      @@Tinkernut what about those non-popular websites with no robots.txt file?

    • @JoaoPedro-ki7ct
      @JoaoPedro-ki7ct 2 years ago +2

      @@jimavictor6022 As long as you don't scrape things like other people's documents from governmental sites, or usernames plus passwords, you should be fine with the rest.
      What website owners are really worried about is their website's availability (whether it is online or offline) and bandwidth usage, as they pay for the amount of gigabytes consumed (they pay for each gigabyte they send to and receive from users).
      So as long as you don't consciously/unconsciously take down their site, you're fine.

    • @JoaoPedro-ki7ct
      @JoaoPedro-ki7ct 2 years ago +3

      @@jimavictor6022 On top of that, they have their automated ways to detect bots; the worst that can happen is getting your IP "banned" or simply restricted from viewing their webpages, and that will happen way, way, way... before you get sued by them.

    • @jimavictor6022
      @jimavictor6022 2 years ago +2

      @@JoaoPedro-ki7ct I really appreciate the reply. Thank you..
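
    For completeness, the robots.txt check discussed above can also be done from code with Python's standard library; the user-agent string and URLs below are placeholders, so treat this as a sketch rather than the video's method:

        # Check robots.txt programmatically with the standard library.
        # The user-agent string and URLs are placeholder examples.
        from urllib.robotparser import RobotFileParser

        parser = RobotFileParser("http://quotes.toscrape.com/robots.txt")
        parser.read()

        if parser.can_fetch("MyScraperBot", "http://quotes.toscrape.com/page/2/"):
            print("robots.txt allows fetching this URL")
        else:
            print("robots.txt disallows fetching this URL")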

  • @nikro7239
    @nikro7239 3 months ago

    When I write to the csv file, for some reason there is always one empty row (with literally nothing) between the actual rows with data

  • @dillkhalifa
    @dillkhalifa 4 months ago

    you owe me bro. i just subscribed to your channel😂😂

  • @DroidEagle
    @DroidEagle 2 years ago +2

    dude where were u?

  • @reghawkins73
    @reghawkins73 1 year ago +1

    I had to add encoding to the line--- file = open("scraped_quotes.csv", "w", encoding='utf-8')

  • @ArqitectTV
    @ArqitectTV 1 year ago +1

    What if the data you are searching for is obtainable but is on separate pages within a given site?

  • @elisabeth9626
    @elisabeth9626 1 year ago

    Thank you very much ❤

  • @jenschristiannrgaard4878
    @jenschristiannrgaard4878 5 months ago

    How much more difficult is it if I want all the sub-pages, where you would normally find more information?
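
    On the two questions above about data spread across separate pages or sub-pages: on the test site linked in the description, it mostly comes down to following the "Next" link until it disappears. A hedged sketch, assuming quotes.toscrape.com still marks that link with <li class="next">; other sites will need a different selector and should be crawled with some rate limiting:

        # Follow pagination links until there is no "Next" button left.
        # Selector assumes quotes.toscrape.com's <li class="next"> markup.
        import time
        import requests
        from bs4 import BeautifulSoup

        base_url = "http://quotes.toscrape.com"
        next_path = "/"

        while next_path:
            page = requests.get(base_url + next_path)
            soup = BeautifulSoup(page.text, "html.parser")

            for quote in soup.find_all("span", attrs={"class": "text"}):
                print(quote.text)

            next_link = soup.find("li", attrs={"class": "next"})
            next_path = next_link.find("a")["href"] if next_link else None
            time.sleep(1)   # be polite between requests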

  • @HayaBaqir
    @HayaBaqir 5 months ago

    What are the pip packages we need to install?
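
    Judging from the kind of script shown here, most likely just two third-party packages; the csv module ships with Python, so nothing extra is needed for it:

        # Third-party packages this kind of script typically needs; install from a terminal:
        #   pip install requests beautifulsoup4
        import csv                      # standard library, no install needed
        import requests                 # installed via: pip install requests
        from bs4 import BeautifulSoup   # installed via: pip install beautifulsoup4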

  • @flobbie87
    @flobbie87 2 years ago

    Last time I did something like that, I used a line-mode browser to flatten the webpage.

  • @vik237
    @vik237 2 years ago

    What Raspberry Pi do you use?

  • @hussainmahady5295
    @hussainmahady5295 2 years ago +1

    Awesome 🔥 bro. Can you make a tutorial about tunnelling and VPNs?

    • @Tinkernut
      @Tinkernut  2 years ago

      Sure can! I made them both a few years ago ;-) Just search my channel

  • @kyrianrahimatulla1561
    @kyrianrahimatulla1561 2 years ago

    I had no clue it was this easy, but how do I find out which websites I'm not allowed to scrape? All I get from Google is ways to prevent scraping on my own website (which I don't have, but that's beside the point).

  • @DarthJeep
    @DarthJeep 2 years ago

    Davie504 fan? "Scrape it..." Just kinda reminded me of the ol' "SLAP IT!" line. lol

  • @santoshpandey23
    @santoshpandey23 3 months ago

    Thanks, this was very good. Can you share a link where you have done the same for a website which requires a username and password? Thanks a ton

  • @angeloj.willems4362
    @angeloj.willems4362 2 years ago

    Cool goggles, where can I get a pair?

  • @ejonesss
    @ejonesss 2 years ago

    how can a website ban scraping, since once the data is downloaded it is open for the taking?
    unless the scraping script acts as a browser and they can figure it out based on user agents or the lack thereof.
    in which case, could you intercept the data from the HTML source in the browser, so it is as if you saved the page as an HTML file, ran it through the script, then refreshed the page and repeated?

    • @LiEnby
      @LiEnby 2 years ago

      technically speaking, there is basically no way to stop it, besides maybe recaptcha, but even then you can simply just have a human do the captcha

    • @pakistaniraveasylum1396
      @pakistaniraveasylum1396 2 years ago

      Law

    • @LiEnby
      @LiEnby 2 years ago

      @@pakistaniraveasylum1396 it's never even been tried in a court tbh

    • @linuxramblingproductions8554
      @linuxramblingproductions8554 1 year ago

      @@pakistaniraveasylum1396 that's like trying to make Inspect Element illegal; it just doesn't work

    • @pakistaniraveasylum1396
      @pakistaniraveasylum1396 1 year ago

      @@linuxramblingproductions8554 yea the law and bureaucracy in general is retarded
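
    On the user-agent point raised at the top of this thread: requests identifies itself as "python-requests/<version>" unless you override the header, which is one of the simplest signals a site can check for. A hedged sketch of sending a custom User-Agent (the string shown is only an example value):

        # Override the default User-Agent header that requests sends.
        # The string below is an example, not a recommendation.
        import requests

        headers = {"User-Agent": "Mozilla/5.0 (compatible; MyResearchBot/1.0)"}
        response = requests.get("http://quotes.toscrape.com", headers=headers)
        print(response.request.headers["User-Agent"])   # confirm what was actually sent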

  • @ura9390
    @ura9390 1 year ago

    Can you do one for people who have never used code?

  • @mrmxyzptlk8175
    @mrmxyzptlk8175 1 year ago +2

    Error: "No module named bs4"

    • @recursion.
      @recursion. 9 months ago +1

      Facing the same, were you able to fix it?

  • @lolkek6807
    @lolkek6807 3 months ago

    What if I want just the first quote, not all of them?
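
    find() returns only the first matching element, whereas findAll()/find_all() returns every match, so one way to grab just the first quote (same selectors as the test site linked in the description):

        # find() stops at the first match; find_all() collects them all.
        import requests
        from bs4 import BeautifulSoup

        page = requests.get("http://quotes.toscrape.com")
        soup = BeautifulSoup(page.text, "html.parser")

        first_quote = soup.find("span", attrs={"class": "text"})
        print(first_quote.text)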

  • @Pixilmb12
    @Pixilmb12 6 months ago

    I use IDLE, but for some reason the 'soup.findAll' call says "NameError: name 'soup' is not defined" :(

    • @Pixilmb12
      @Pixilmb12 6 months ago

      Fixed 🤦‍♂

  • @royalhermit
    @royalhermit 2 years ago +1

    What is the "w" on line 10? I am getting NameError: name 'scraped_quotes' is not defined

    • @ashrude1071
      @ashrude1071 2 years ago +1

      You probably have a typo

    • @Tinkernut
      @Tinkernut  2 years ago +2

      Running it with my code from GitHub works fine: github.com/gigafide/basic_python_scraping/blob/main/basic_scrape_csv_export.py

  • @martinrages
    @martinrages 2 years ago +1

    Can websites detect scraping? If so, how do I escape the Dutch AIVD?

    • @JoaoPedro-ki7ct
      @JoaoPedro-ki7ct 2 years ago

      Yes, they have their ways to detect automated requests, but what they do when they detect "bots" is up to each website.

    • @LiEnby
      @LiEnby 2 years ago +1

      Yes and no. You can check for things like the user agent string or try to run JavaScript or something like that; however, it's actually a really hard problem to solve because a scraping script can look indistinguishable from a browser.

  • @jackrider798
    @jackrider798 2 years ago +1

    Love your videos. I don't understand much of the content, but what's the difference between taking these quotes via code and just copy-pasting into an Excel sheet? I'm a noob, sorry

    • @JoaoPedro-ki7ct
      @JoaoPedro-ki7ct 2 years ago +1

      You can do it automatically every X amount of time.
      You can use a "bot" to do something with the data you scraped.
      I don't use Excel, but if you're talking about what I think you are, Excel is doing exactly what was talked about in this video: web scraping.
      The thing is that Excel does it for you without the need to program it first, but the web scraping it does is very, very limited compared to what tools made for scraping can do.

    • @Ryan1456100
      @Ryan1456100 2 years ago +2

      In practice? Nothing is different; you get the same result. However, let's say you have a website with 2000 quotes and you need to keep a sheet up to date. That's where a scraper would be useful, as it's time you really only need to spend once. Plus, at that kind of scale it would be faster to write the code than do it manually.

    • @jackrider798
      @jackrider798 2 years ago +1

      @@JoaoPedro-ki7ct thank you!

  • @RENO_K
    @RENO_K 2 months ago

    I'm only giving a good comment bc my gf told me to.
    Good video 👍

  • @durrium
    @durrium 2 years ago

    What do I do if the page gives a 404?
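
    It is worth checking the status code before parsing; a 404 usually means the URL is mistyped or the page has moved. A hedged sketch using requests (the URL below is a deliberately dead link for illustration):

        # Check the status code before handing the response to BeautifulSoup.
        import requests
        from bs4 import BeautifulSoup

        response = requests.get("http://quotes.toscrape.com/no-such-page")
        if response.status_code == 200:
            soup = BeautifulSoup(response.text, "html.parser")
        else:
            print(f"Request failed with status {response.status_code}; check the URL")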

  • @jpsl5281
    @jpsl5281 1 year ago +1

    It's not working with OpenTable

  • @user-wn3eq6tq7o
    @user-wn3eq6tq7o 1 year ago

    The really dry jokes are surprisingly pleasant.. who could scrape the web without a web? What do you think all the spiders think about that?

  • @NexusGuru
    @NexusGuru 7 months ago

    is it really this simple?