Beautiful Soup 4 Tutorial #1 - Web Scraping With Python

  • Published: Jan 9, 2025

Comments • 329

  • @adnanpramudio6109
    @adnanpramudio6109 3 years ago +127

    I started learning Python a few months ago and chose web scraping as my specialization. Your Selenium playlist is fascinating. Thanks Tim

    • @mihailmilenkov6223
      @mihailmilenkov6223 3 years ago +2

      Hey how did you progress?

    • @AliAhmed63708
      @AliAhmed63708 2 years ago +4

      r u currently freelancing webscraping ?

    • @alex59292
      @alex59292 2 years ago +3

      @@AliAhmed63708 i am

    • @hjvela1907
      @hjvela1907 1 year ago +2

      @@alex59292 So where can I reach you for some web scraping freelancing?

    • @japhethmutuku8508
      @japhethmutuku8508 5 months ago

      @@hjvela1907 hello do you still need a web scraping freelancer?

  • @Recklessness97
    @Recklessness97 1 year ago +6

    Subscribed. The last 4 minutes of the video is exactly what I needed. The Soup tree structure part, specifically dissecting the price out of the HTML code. I could get the price with my own web scraping script but it also came with a bunch of other "junk" that was a part of the "tree". Thanks for pointing me in the right direction and explaining how it works!!!!!

  • @unpatel1
    @unpatel1 2 years ago +5

    I had been putting off learning web scraping for some time now and finally jumped in today and watched my first video on this topic. I like Tim's videos because they are simple and easy to understand, so I decided to go with his video on this topic. Thank you.

  • @dariyababumalapati7144
    @dariyababumalapati7144 2 years ago +104

    The 'text' argument was renamed to 'string' in Beautiful Soup 4.4.0 (see the snippet after this thread).

    • @DetectiveConan990v3
      @DetectiveConan990v3 1 year ago +2

      yes thank you

    • @IanWeingardt
      @IanWeingardt 1 year ago +2

      thank you so much, I was very lost when I got the "DeprecationWarning"

    • @parvpaigwar2925
      @parvpaigwar2925 7 months ago +3

      @@IanWeingardt It appears that the content might be dynamically loaded by JavaScript on the Amazon website, which means it might not be present in the initial HTML response
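
    A quick illustration of the rename mentioned at the top of this thread. This is a minimal sketch with made-up HTML; newer Beautiful Soup releases accept string= where older code passed text=, which now triggers a DeprecationWarning:

    from bs4 import BeautifulSoup

    html = "<div><p>Price: <strong>$19.99</strong></p></div>"
    soup = BeautifulSoup(html, "html.parser")

    # Old style (emits a DeprecationWarning on newer versions):
    #   soup.find_all(text="$19.99")

    # Current style: pass string= instead of text=
    matches = soup.find_all(string="$19.99")
    print(matches)  # ['$19.99']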

  • @jimstand
    @jimstand 2 years ago +3

    So I am writing some software to start a business. I am scraping 25 web pages. I hacked through the first 20. The last 5 were difficult, so I tried using BS4 with this video. Using BS4 made the last 5 easier than any of the first 20. Thank you Tim!!

  • @tanmaypatel4152
    @tanmaypatel4152 3 years ago +81

    Man, I was literally looking for a good tutorial on Bs4 and guess what, Tim read my mind. Thank you very much Tim :)

    • @BB-si6cz
      @BB-si6cz 3 years ago +3

      And I started with web scraping like 2 days ago

    • @tanmaypatel4152
      @tanmaypatel4152 3 years ago +1

      @@BB-si6cz Oh that's cool !

    • @Damientrades
      @Damientrades 3 years ago

      Deffo the YouTube AI reading your mind, maybe it was Alexa

    • @tanmaypatel4152
      @tanmaypatel4152 3 years ago

      @@Damientrades I was already subscribed to Tim so I got the notification :)

    • @melodyparker3485
      @melodyparker3485 3 years ago

      I'm pretty sure that Corey Schafer also has a good tutorial about beautiful soup.

  • @dbstudio7859
    @dbstudio7859 2 years ago +6

    def amazing():
        while 1:
            print("Thanks Tim")

    amazing()

  • @hydrocrazynik76
    @hydrocrazynik76 3 years ago +14

    Such a great tutorial! I usually don't comment but this was absolutely spectacular. Thank you so much!

  • @sampsondzameshie-sb3ek
    @sampsondzameshie-sb3ek 1 year ago +1

    Hi, I love all your videos boss.
    Thank you very much.
    I do not have an IT background but fell in love with your videos and am now studying software development in school.

  • @kristaandrews3405
    @kristaandrews3405 2 years ago +1

    I'm using Anaconda, so I had to use different import information. You explained this better than any video I've watched.

  • @igordc16
    @igordc16 3 years ago +7

    Straightforward, simple explanations, easy to follow. Thanks Tim! You're an excellent teacher, keep up the great work you're doing here on YouTube.

  • @garybenhart
    @garybenhart 1 year ago +13

    Unfortunately, the code mentioned in the video at 13:15 no longer seems to work, probably because NewEgg no longer allows a Python script to download the HTML from its pages. It seems to me that most websites are "bot protected" today, a problem that Tim specifically mentions in the video at 11:25. This points to a very significant problem when you consider using a tool like Python to web scrape, because plain, standard Python code is often not going to get through (a common partial workaround is sketched after this thread).
    Finally, even when you do get lucky and get your Python code to scrape, code that works perfectly today will probably not keep working for long.

    • @AsuGhimire
      @AsuGhimire 1 year ago

      real, it's a struggle to learn when you're trying to debug and it's just privacy policies in your html files xD
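
    A minimal sketch of the usual first workaround for the bot-protection issue discussed above: sending a browser-like User-Agent header with requests. The URL and header value here are placeholders, and this is no guarantee; heavily protected or JavaScript-rendered pages still will not return usable HTML this way.

    import requests
    from bs4 import BeautifulSoup

    url = "https://www.example.com/product-page"  # placeholder URL

    # Many sites reject the default python-requests user agent, so pretend to be a browser
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

    response = requests.get(url, headers=headers, timeout=10)
    print(response.status_code)  # 200 means the server actually served the page

    soup = BeautifulSoup(response.text, "html.parser")
    print(soup.title)  # quick sanity check that real HTML came back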

  • @selo2410
    @selo2410 3 years ago +3

    THANK YOU, I've been waiting for you to make a tutorial on this for some time now, thanks again.

  • @acutisnasus7217
    @acutisnasus7217 2 years ago +1

    8:26 Oh nooo,... you're in the matrix. You glitched!!!
    Top tutorial!!!

  • @MrBobman82
    @MrBobman82 3 years ago +1

    Tim I just started scraping with BS4 THANK YOU!

  • @neroplus-it
    @neroplus-it 3 years ago +4

    your videos on web scraping motivated me to create my own video series about this topic! as always, great content! thanks for sharing your knowledge.

  • @Khyreemlb
    @Khyreemlb 3 years ago +2

    Amazing stuff man. You got yourself a new sub. Thank you for all of the content and hard work. I've been bingeing all of your videos like I was watching Netflix lol

  • @namename-cl8kk
    @namename-cl8kk 3 years ago +3

    Finally, the best timing ever. I was waiting for this, please speed up that series

  • @thec-m
    @thec-m 2 years ago +8

    This was a really useful tutorial and it was clear to understand, unlike some of the other videos I found. Thank you! I'm sure there are many people out there like me that find themselves trying to slightly improve their code, resulting in learning how to use some new massive python library like this.
    Back to the video: I think it would have been good to replace the URL at the end of the video with another NewEgg listing to show the same code extracting a different price (assuming the tags are the same). Also, looks like you forgot to edit out the part at 8:24.

  • @oskarwallberg4566
    @oskarwallberg4566 2 years ago +1

    Beautiful video man! Just realised how pedagogical and well structured your videos are.

  • @Mallan_
    @Mallan_ 1 year ago

    Many thanks. I was struggling with scraping some links from a page but couldn't until I watched this video.

  • @as_below_so_above
    @as_below_so_above 3 years ago +3

    Great video and great timing to put it out! I had to use BeautifulSoup for the first time just last week and this was great at solidifying everything I learned!

  • @toshitsingh7270
    @toshitsingh7270 3 years ago +3

    As always your tutorials are super educational, and also thanks for teaching it for free, it really helps.

  • @七人の侍-b1q
    @七人の侍-b1q 3 years ago +30

    "Dummy html file"
    The html file who is trying his best: 😿👍

  • @nightwind132
    @nightwind132 3 years ago +1

    god, that 3080 price brought back the stress of when I was hunting down my own. Great tutorial btw, it's been a great help!

  • @wlqpqpqlqmwnhssisjw6055
    @wlqpqpqlqmwnhssisjw6055 3 years ago +1

    I am already good with Bs4, but I just came to give you a like for your work

  • @BonVoyageWorld
    @BonVoyageWorld 1 year ago

    you should have more than "just" 1.18M subscribers. thank you Sir!

  • @ChrisOfTheOutdoors
    @ChrisOfTheOutdoors 2 years ago +3

    Anybody know why I would be getting "IndexError: list index out of range" on line 10 - "parent = prices[0].parent" at the 15:29 minute mark in the video? I've copied the whole code exactly.

    • @abssdabss
      @abssdabss 1 year ago

      make sure your url is correct
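
    Regarding the IndexError in this thread: find_all returns a plain Python list, and indexing [0] into an empty list raises exactly that error, which usually means the "$" text was never in the HTML that came back (blocked request, empty page, or JavaScript-rendered content). A small defensive sketch with made-up HTML:

    from bs4 import BeautifulSoup

    html = "<div><span>Price:</span> <strong>$</strong><strong>1,549.99</strong></div>"
    soup = BeautifulSoup(html, "html.parser")

    prices = soup.find_all(string="$")  # a plain list; may be empty

    if prices:  # only index into it if something actually matched
        parent = prices[0].parent
        print(parent.prettify())
    else:
        print("No '$' found - the page may be blocked, empty, or rendered by JavaScript")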

  • @davevanemmenes27
    @davevanemmenes27 2 years ago

    Congrats on your 1 million,
    All the best

  • @derelictmanchester8745
    @derelictmanchester8745 1 year ago +1

    Love your channel Tim, the best tutorial ever..

  • @loisvallee7291
    @loisvallee7291 3 years ago +2

    need this to access my uni's timetable more easily, thanks man !

  • @Said664016
    @Said664016 2 years ago

    The best tutorial ever! You're saving my life!

  • @philippededeken4881
    @philippededeken4881 1 year ago

    Great video. Thanks to you, I'm starting a new business in the tyre industry.

  • @lucaskellerlive
    @lucaskellerlive 8 months ago

    Do you have any availability if I paid for a Zoom call? I watch your videos all the time and I'd really appreciate it if I could hop on a Zoom at your hourly rate to get a few specific questions answered. Thanks for everything!

  • @prodigyprogrammer3269
    @prodigyprogrammer3269 3 years ago +2

    8:23 did you forget to edit 😂😂
    love your videos BTW ❤️

  • @Spleed7887
    @Spleed7887 3 years ago +11

    Dude, I think you should do more C++ tutorials. They're really good!

    • @elpython3471
      @elpython3471 3 years ago +1

      I second this. Those tuts are good!

  • @PeterPankowski
    @PeterPankowski 10 months ago

    Excellently done for a first example! Amazingly explained!

  • @keifer7813
    @keifer7813 2 years ago +1

    8:25 It's always fun seeing bloopers mid video lol

  • @GeneralCA-k9l
    @GeneralCA-k9l 4 months ago

    I saw this video after two years, thanks pro ❤❤

  • @markslima1557
    @markslima1557 2 years ago +1

    Thank you, this video is so straightforward. I think I finally got the hang of this

  • @rahulxdd
    @rahulxdd 3 years ago +8

    Thank you Tim. I always wanted to learn Beautiful soup for personal projects but never did. Today is the first time I watched a tutorial on this topic. Anyway, how long will this series be? Can't wait for the next part.

  • @tieutantan9562
    @tieutantan9562 3 years ago +1

    This series is just what I need. Thanks Tim!

  • @andrealcantara1437
    @andrealcantara1437 2 years ago +17

    I'm trying this on a different website. I can get the HTML, but when I try to look for specific text it doesn't work; I always get an empty list, even though I can see that the text is on the page (see the sketch after this thread).

    • @labscience8271
      @labscience8271 2 years ago +2

      Same problem. Did you find a solution?

    • @hamzayunusa2224
      @hamzayunusa2224 2 years ago

      @@labscience8271 did u find one?

    • @abdulrahmanal-saadani8769
      @abdulrahmanal-saadani8769 2 years ago +3

      I have the same problem, but if you noticed, in the video he said that some websites may block you when you try to scrape their html page, so maybe that is the reason why you get an empty list

    • @DauvO
      @DauvO 1 year ago +1

      @@abdulrahmanal-saadani8769 I have the same problem.. but I think that if the html can be seen in the console in the previous steps, that means the robots haven't done any blocking? I would think that if you can see the data, it's game over once you learn how to manipulate it.

    • @AnibalDellagiovanna
      @AnibalDellagiovanna 1 year ago

      For me it only works if you search for the whole text in the element. For example, if the element contains "The full text", searching for "full" or "The full" will not work; it only works if you search for "The full text". You can test it with a local HTML file. It's not the website filtering it.
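
    That matches how exact-string searches behave: text=/string= compares against a tag's complete string, so partial matches return nothing. A small sketch of the usual workaround, passing a compiled regular expression instead of a plain string (made-up HTML):

    import re
    from bs4 import BeautifulSoup

    html = "<p>The full text</p>"
    soup = BeautifulSoup(html, "html.parser")

    print(soup.find_all(string="The full"))          # [] - only whole, exact strings match
    print(soup.find_all(string="The full text"))     # ['The full text']
    print(soup.find_all(string=re.compile("full")))  # ['The full text'] - substring via regex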

  • @proxyscrape
    @proxyscrape 2 years ago

    Great tutorial Tim! I appreciate the clear and concise explanations you provided.

  • @romanv4519
    @romanv4519 3 years ago

    Awesome tutorial. New to this channel, but I like your style Tim. Thanks a lot, very well explained!

  • @ezekomaugoo5569
    @ezekomaugoo5569 2 years ago

    Quite a concise and informative course. Thanks for this guide.

  • @ScriptureFirst
    @ScriptureFirst 1 year ago

    outstanding walkthru, as usual, ty... I like the chapter divisions, concise talking, maximized screen, text size :)

  • @khiryshank4930
    @khiryshank4930 1 year ago +6

    Anybody else having problems with bot protected sites? I finally got it to read on Wikipedia, but other websites return an empty string.

  • @anwar587
    @anwar587 3 years ago

    Web scraping is very useful trust me and of course beautifulsoup is the best library for this

  • @popey747
    @popey747 1 year ago

    Wonderful to be learning Beautiful Soup with Kermit

  • @matrix26uk
    @matrix26uk 2 years ago

    1 quick point to add about BS4 not installing: sometimes being connected to a VPN can stop modules from being installed. Try dropping off the VPN and running Tim's install commands

  • @script_tester-1
    @script_tester-1 9 days ago

    this is great, good work!

  • @keifer7813
    @keifer7813 2 years ago +2

    8:09 Isn't nesting tags in HTML impossible? This part got me confused
    Also at 16:12, couldn't you just use parent.strong instead of parent.find("strong")? (see the note after this thread)
    Great video by the way

    • @josepholiver5713
      @josepholiver5713 1 year ago

      I am running into this same exact issue. Not sure what to do and can't find a stack overflow forum that's helpful
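
    On the parent.strong question above: yes, attribute access like tag.strong is shorthand for finding the first descendant tag with that name, which is the same thing tag.find("strong") returns (or None if there is none). A minimal sketch with made-up HTML:

    from bs4 import BeautifulSoup

    html = "<div><p>Total: <strong>$1,549.99</strong></p></div>"
    soup = BeautifulSoup(html, "html.parser")
    parent = soup.div

    # Attribute access and find() both return the first matching descendant tag
    print(parent.strong)          # <strong>$1,549.99</strong>
    print(parent.find("strong"))  # <strong>$1,549.99</strong>
    print(parent.find("em"))      # None - find() makes the "might not exist" case explicit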

  • @jacobfuller5643
    @jacobfuller5643 2 years ago

    super helpful for a project I am working on, thanks!

  • @prof.code-dude2750
    @prof.code-dude2750 3 years ago +1

    I wanted to create a BS4 project 😀 and you made a tutorial

  • @hmodexl
    @hmodexl 3 years ago

    ur explanations are very clear, thanks for ur effort.

  • @RandyWatson80
    @RandyWatson80 2 years ago

    As always, this was super clear

  • @friday8118
    @friday8118 2 years ago +2

    How do we input the html or the website we want to scrape? Great video, thank you.

  • @julianaschmidt1059
    @julianaschmidt1059 2 years ago +1

    So useful! Thank you so much!

  • @FreAcker
    @FreAcker 1 year ago +3

    hey, just updating:
    find_all(text=) is deprecated,
    switch to the string= parameter instead ;)

  • @mghostdog
    @mghostdog 1 year ago +3

    So when I run the script looking for the "$" on the site I'm parsing, I get an empty list [ ]. Does that mean that the website is preventing me from seeing that particular item/price?

    • @intelblox7354
      @intelblox7354 1 year ago

      im getting a index error when i put prices[0]

    • @ethicalhacker9720
      @ethicalhacker9720 1 year ago

      I think it is the website. I tried another website and it worked.

  • @wege8409
    @wege8409 3 years ago +1

    This reminds me of how some nights Grandpa and I would eat melty cheese in the mudroom. We laughed so much as cheese dripped down his face. I can still remember his laugh. It sounded like a hundred murders of crows filtered through a ring modulator. RIPO Grandpa please stop haunting my dreams.

  • @AmirRTR
    @AmirRTR 8 months ago

    best guy on yt

  • @mmbaguette1520
    @mmbaguette1520 3 years ago +3

    Hey Tim, can you make a video on how to get a programming job? 👋

  • @tildesarecool7782
    @tildesarecool7782 2 years ago

    I was following along with this video and couldn't get it to work. Actually I was following along but with my public "all games" Steam library page. I couldn't figure out why it wasn't working. I was losing my mind. Then I finally saw in the source this JavaScript block with formatted data for all my games. It's a "DB Query", and the JS appends the data to the DOM programmatically. So indirectly this video taught me why Beautiful Soup couldn't find the tags I kept searching for on the Steam library page. Side note: anyone wanting to scrape their Steam library for some reason (instead of using SteamDB or whatever), it's all there on that page as some kind of JSON.
    Good video btw.

  • @guy6567
    @guy6567 2 years ago

    Thanks Tim! :) awesome and helpful

  • @JanBadertscher
    @JanBadertscher 3 years ago +6

    Tried 3 BS4 tutorials, on 2 completely fresh environments (one native py3, the other one a jupyterlab environment), and find_all() always returns empty. Any ideas why this happens? (a debugging sketch follows this thread)

    • @ivanyosifov2629
      @ivanyosifov2629 3 years ago

      If find_all returns empty array that means what you're looking for is not in the document

    • @Xero_Wolf
      @Xero_Wolf 3 years ago +2

      @@ivanyosifov2629 I have the same issue and what I'm searching for is in the document. Even when I test with a simple html.

    • @ivanyosifov2629
      @ivanyosifov2629 3 years ago

      @@Xero_Wolf It might depend on the editor you are using. For some editors you need to give the file path as */index.html* or *./index.html*

    • @camplays487
      @camplays487 3 years ago +1

      @@ivanyosifov2629 For me, the .find("strong") returns NONE even though the print statement before it clearly shows strong tags, any idea what could be causing that?

    • @_n1c0l4s
      @_n1c0l4s 2 years ago +2

      I am using find_all(text="something"), and it also returns an empty array... I know that what I am looking for actually is in the document. Could the problem be something about how the html file is structured?
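
    A small debugging sketch for the empty find_all() results in this thread, assuming the page is fetched with requests (the URL and search text are placeholders). The idea is to first check whether the text ever arrived in the raw HTML; if it did not, the content is probably injected by JavaScript or the request was blocked, and no Beautiful Soup query will find it:

    import requests
    from bs4 import BeautifulSoup

    url = "https://www.example.com/some-page"  # placeholder URL
    target = "$"                               # placeholder text to look for

    response = requests.get(url, timeout=10)
    print(response.status_code)                # anything other than 200 is a red flag

    # Step 1: is the text even in the raw HTML the server sent back?
    print(target in response.text)

    # Step 2: only if step 1 prints True is a parser-level search worth debugging
    soup = BeautifulSoup(response.text, "html.parser")
    print(soup.find_all(string=target))        # exact-match search; see the regex note earlier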

  • @Knuddelfell
    @Knuddelfell 3 years ago +1

    exactly needed this

  • @itssuperbaby2979
    @itssuperbaby2979 2 years ago +1

    Amazing tutorial, but one question - at 9:09 what does the [0] do? I tried running the code without it and there was a bug, but with it, it worked perfectly fine. I'm just wondering what its function is

    • @Mmmkay..
      @Mmmkay.. 2 years ago +1

      He accessed the first tag in the html file using index [0]. Suppose he had used index [1], he would've accessed the second tag in the html file. He did something similar again @15:25 when he was locating the first parent tag of the price value. Hope that helps!

    • @itssuperbaby2979
      @itssuperbaby2979 1 year ago +1

      @@Mmmkay.. Thank you so much!
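
    To illustrate the reply above: find_all always returns a list of matches, and [0] is ordinary Python list indexing for the first one. A minimal sketch with made-up HTML:

    from bs4 import BeautifulSoup

    html = "<div><p>first</p><p>second</p></div>"
    soup = BeautifulSoup(html, "html.parser")

    tags = soup.find_all("p")  # a list of every matching tag
    print(tags[0])             # <p>first</p>  - index 0 is the first match
    print(tags[1])             # <p>second</p> - index 1 is the second match
    print(soup.find("p"))      # find() is a shortcut for "the first match or None"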

  • @ayaanp
    @ayaanp 3 years ago

    I think Tim can read our minds 👀

  • @pokedreadhead6089
    @pokedreadhead6089 2 years ago

    So sick thanks for the video!

  • @b07x
    @b07x 3 years ago

    Thanks, this was easier than I thought

  • @learnwitharbia3477
    @learnwitharbia3477 1 year ago

    Thank you so much for such valuable content

  • @filmedbyjulia124
    @filmedbyjulia124 3 months ago

    I liked this video, good content.

  • @zawadahmed5484
    @zawadahmed5484 3 years ago

    Keep up your beautiful content

  • @renecro1007
    @renecro1007 6 months ago +1

    Thank you! Now I can grab dad jokes from 20yo websites and send them to my friends!

  • @AmbiNerd
    @AmbiNerd 3 years ago

    wooo wooo thanks TIM huge help!

  • @hollowr9953
    @hollowr9953 3 years ago

    Interesting video, as always

  • @Popcorn_and_funny_moments
    @Popcorn_and_funny_moments 2 years ago +1

    hey tim, great work. I need to learn how to make columns and boxes using Python in Visual Studio Code. Thanks very much.

  • @helixo185
    @helixo185 1 year ago

    8:58 I tried but it gives a SyntaxError, what could have happened?

  • @jamiemorrissey2858
    @jamiemorrissey2858 2 years ago

    Nice, good video, learned a lot

  • @simple-security
    @simple-security 1 year ago

    well played sir...well played.

  • @chukwudifrancisawulor883
    @chukwudifrancisawulor883 4 months ago

    Thanks Tim 🎉

  • @SmeeUncleJoe
    @SmeeUncleJoe 2 years ago +2

    I tried your code but on a different webstore website. It definitely has "$" signs on the page, and with some modification I was able to print the whole HTML code out via prettify, but I get this error: "IndexError: list index out of range" on the line "parent = prices[0].parent". Any ideas?

    • @SadMark011
      @SadMark011 2 years ago +3

      I am having the same problem. I even copied the $ sign from the site via inspect element in case there was a unicode problem, but it still didn't work

    • @CrawdadSoftware
      @CrawdadSoftware 1 year ago

      maybe the website has bot protection?

  • @jalepenofatty6704
    @jalepenofatty6704 2 years ago

    great video, hit a bunch of roadblocks with the imports and the environment %PATH% changes I had to make, and then the openssl issue, but yeah, it took me a day to get through this and finish. I appreciate the hard work.

    • @unpatel1
      @unpatel1 2 years ago +1

      Glad that you finally solved your problem! Me too, I had a hard time with the %PATH% thingy!@$$. From time to time the %PATH% problem appears from nowhere and eats up lots of my time. I have worked a little with R and I found it relatively simple and easy in this respect: package installation and management etc...

    • @andrews9168
      @andrews9168 2 years ago +1

      @@unpatel1 another workaround is to use pycharm

    • @unpatel1
      @unpatel1 2 years ago +1

      @@andrews9168 Thank you for your suggestion. I do have pycharm but not using it, just use VS Code all the time. I will definitely try pycharm.

  • @thesocksv2483
    @thesocksv2483 2 years ago

    Thank you a lot, you're the best.

  • @extropiantranshuman
    @extropiantranshuman 1 year ago

    the camera angle alone is increasing my intellect

  • @mousemeister
    @mousemeister 2 years ago

    nice editing job and content ofc thx

  • @greening6904
    @greening6904 3 years ago

    Tim, you won't believe it, I was working on a meteo app and needed a parser, thx

  • @THISISCHARISMATIC
    @THISISCHARISMATIC 1 year ago

    Absolutely great videos. I'm new to python and coding in general. Your content is really great and easy to follow. Would this web scraping method work for finding stuff like metadata for songs?

  • @laurasasso8798
    @laurasasso8798 2 years ago

    Perfect ! Thank you

  • @alagappank1242
    @alagappank1242 3 years ago

    Superb...🤩

  • @tomasoon
    @tomasoon 2 years ago

    Great video, but the most impressive thing is that when you made this video the video card's price was $2613, and now it's $1549, less than a year later xD

  • @Zydres_Impaler
    @Zydres_Impaler 3 years ago

    Tim, please make a series or video for the "requests" library.

  • @shawnbon17
    @shawnbon17 8 months ago

    How do you use BeautifulSoup to find an element with a specific attribute? For example, there is a div whose data-label = "Lifetime access to course content" and doc.find_all() did not work for it. I had to use doc.select('div[data-label="Lifetime access to course Content"]'). However, I was then not able to do another find_all or select afterwards; it would throw an error. I'm finding scrapy and xpaths to be much simpler than Beautiful Soup, and I'm confused about how to chain in B.S.
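
    On the attribute question above: find_all can filter on arbitrary attributes through the attrs dictionary (hyphenated names like data-label cannot be passed as keyword arguments), and both find and select_one return Tag objects you can keep chaining on. A minimal sketch with a made-up document; the data-label value is just illustrative:

    from bs4 import BeautifulSoup

    html = """
    <div data-label="Lifetime access to course content">
      <span class="benefit">Yes</span>
    </div>
    """
    soup = BeautifulSoup(html, "html.parser")

    # data-* attributes can't be keyword arguments, so pass them through attrs=
    box = soup.find("div", attrs={"data-label": "Lifetime access to course content"})

    # The same element via a CSS selector; select_one returns a single Tag (or None)
    same_box = soup.select_one('div[data-label="Lifetime access to course content"]')

    # Both results are Tag objects, so further find()/select_one() calls chain normally
    print(box.find("span", class_="benefit").text)   # Yes
    print(same_box.select_one("span.benefit").text)  # Yes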

  • @WasimAkram-of9iv
    @WasimAkram-of9iv 2 years ago

    Hi, this is very interesting, thanks for sharing. I have one question: can we scrape a website's industry? Like whether a site belongs to a category such as Automotive or Healthcare, etc.

  • @RonaldPostelmans
    @RonaldPostelmans 1 year ago

    Hi Tim, nice video, neat stuff. Do you have any links to a tutorial, by you or someone else, on scraping websites that block scrapers?

  • @luisthesup
    @luisthesup 6 months ago

    What if for some reason the price is not inside a predefined tag in let’s say 2 out of the 500 pages that I scrape, that’s going to cause an error right? Or do all pages follow the same format?

  • @chrisa1234
    @chrisa1234 1 year ago

    How come calling 'parent' on the $ sign returned so many layers of tags? Why was it not just the tags directly surrounding it?
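
    A short sketch of what is happening there: the "$" match is a string node, its .parent is the single tag that directly contains it, and printing that tag prints everything nested inside it, which can look like many layers. Walking further up the tree is what .parents (plural) does. The HTML below is made up for illustration:

    from bs4 import BeautifulSoup

    html = "<li><div class='price'><strong>$<em>1,549</em><sup>99</sup></strong></div></li>"
    soup = BeautifulSoup(html, "html.parser")

    dollar = soup.find(string="$")  # a NavigableString, not a tag
    print(dollar.parent.name)       # strong - the one tag directly around "$"
    print(dollar.parent)            # prints <strong> plus everything nested inside it

    for ancestor in dollar.parent.parents:  # .parents walks upward one layer at a time
        print(ancestor.name)                # div, li, [document]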

  • @siamahmed8287
    @siamahmed8287 3 years ago +2

    Can you make a tutorial on how to scrape a dynamic web page, like one built with React?

    • @rog_shakhyar6171
      @rog_shakhyar6171 3 years ago

      it would be the same

    • @omarciano42
      @omarciano42 3 years ago

      That would only be possible with Selenium, which Tim has a series on; just search for it (a minimal sketch follows below)
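
    For a JavaScript-rendered page like a React app, a common pattern is to let Selenium load the page in a real browser and then hand the rendered HTML to Beautiful Soup. A minimal sketch, assuming Selenium 4+ with Chrome available locally; the URL is a placeholder:

    from selenium import webdriver
    from bs4 import BeautifulSoup

    driver = webdriver.Chrome()                      # Selenium 4 can locate the driver itself
    driver.get("https://www.example.com/react-app")  # placeholder URL
    # for slow pages, an explicit wait may be needed before reading page_source
    html = driver.page_source                        # HTML *after* JavaScript has run
    driver.quit()

    soup = BeautifulSoup(html, "html.parser")
    print(soup.title)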