Python Web Scraping with Beautiful Soup and Regex

  • Published: 10 Sep 2024

Comments • 277

  • @cetrusbr
    @cetrusbr 6 years ago +457

    I like your tutorials because u go directly to the content, something rare in youtube these days...

    • @kalef1234
      @kalef1234 5 years ago +19

      Hey guys what's up before we get started smash that subscribe button, like this share it i am giving away a fucking gift card follow the links to my merch watch my ads really helps thanks okay...roll that intro *45 second intro*

    • @sourabhch3044
      @sourabhch3044 3 years ago +2

      So true thank you for putting out the points which matters.

  • @mixalismcgamer3188
    @mixalismcgamer3188 4 years ago +19

    Dude i watched over 15 videos+ that was recommended and after hours i found this FULLY EXPLAINED.

  • @kalef1234
    @kalef1234 5 years ago +13

    I felt so powerful as soon as I pulled an array of strings from a random website. Thank you for your great tutorial

  • @zigginzag584
    @zigginzag584 4 years ago +2

    It helps so much to have someone that matches your personality when learning stuff.
    I can't stand when asking someone for instructions on how to do something and they tell me everything
    that I can expect and every once i a while throw in the thing I'm supposed to do next.
    None of the fluff here. Just context. Every other creator would/has made this subject a 45min+ video
    but here I am feeling proficient after just 14 minutes with EM.
    Thank you, Sir!

  • @dilshand.5127
    @dilshand.5127 6 years ago +14

    I was able to do this on another leaderboard site, appreciate your work here.

  • @PS3PCDJ
    @PS3PCDJ 4 months ago

    This is THE best beautifulsoup tutorial on the internet.

  • @estilen69
    @estilen69 6 years ago +9

    Using CSS selectors is the way to go, gets rid of nested for loops and is more robust.
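
    For reference, a minimal sketch of the CSS-selector approach, assuming a hypothetical
    leaderboard table with class "leaderboard" (requests and beautifulsoup4 installed):

    import requests
    from bs4 import BeautifulSoup

    html = requests.get("https://example.com/leaderboard").text  # hypothetical URL
    soup = BeautifulSoup(html, "html.parser")

    # One selector replaces nested find_all() loops over table/tr/td.
    for cell in soup.select("table.leaderboard tr td"):
        print(cell.get_text(strip=True))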

  • @xrefor
    @xrefor 5 years ago +10

    Love this presentation. Straight to the point with short and specific explanation. Keep it coming! :)

  • @bhumikakhiyani4230
    @bhumikakhiyani4230 4 years ago

    I was struggling to navigate and iterate through the second span tag in multiple td tags, i.e. (tr[1:]/td[0]/span[1]).
    I was trying it the whole day.
    This is the best tutorial I have seen.
    Thank youuuuu.
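
    A minimal sketch of that navigation, assuming a hypothetical page where each data row's
    first td holds two span tags and the wanted value sits in the second one:

    import requests
    from bs4 import BeautifulSoup

    soup = BeautifulSoup(requests.get("https://example.com/table").text, "html.parser")  # hypothetical URL

    for row in soup.find_all("tr")[1:]:                # tr[1:] -> skip the header row
        first_cell = row.find_all("td")[0]             # td[0]
        second_span = first_cell.find_all("span")[1]   # span[1]
        print(second_span.get_text(strip=True))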

  • @robertpearson2143
    @robertpearson2143 6 years ago +2

    Been doing something similar for a while but in a much more complicated way. Looking forward to making my life much easier. Thank you!

  • @CODTALES-KILLSTREAKS
    @CODTALES-KILLSTREAKS 5 years ago +2

    Hey man! I watched this and applied the concepts to a weather site and made a csv of all the sunset / sunrises in 2019! Thank you! Please I love the way you explain things keep making videos sir! I have applied your teaching in a couple videos and it’s great! Learning so much!

  • @justinhamilton8647
    @justinhamilton8647 2 years ago +1

    Cheers man i used this tutorial to sort through 310000 embed links you’re so awesome

  • @kurdmajid4874
    @kurdmajid4874 3 years ago +2

    he makes it so quick and simple

  • @YeeYeez
    @YeeYeez 5 years ago +2

    If only I had this tutorial a few years back. Good stuff.

  • @SusiEzhil
    @SusiEzhil 5 years ago +7

    wow.. that's a crisp explanation... you're the man!!

  • @ladyViviaen
    @ladyViviaen 3 years ago

    was trying to scrape modarchive for my project, this is way better than writing the name and id down by hand lmao, thank you!

  • @enyoc3d
    @enyoc3d 5 years ago +3

    in a sea of youtube tutorials yours is the pearl. thanks!

  • @worsethanjoerogan8061
    @worsethanjoerogan8061 5 years ago +1

    Dude you're helping me out immensely with computer science courses

  • @susbedoo
    @susbedoo 5 years ago +1

    You are the coolest tech guy I have ever seen on YouTube

  • @Lu3ck
    @Lu3ck 5 years ago +2

    Your videos are fast but glorious! Love your content man! Thank you! Bless 🙏

  • @yanggao4878
    @yanggao4878 3 years ago

    Your videos are fast-paced and straight to the point. Thanks!

  • @impossible441
    @impossible441 6 years ago +1

    This is remarkable, very informative and down to the earth - I really love this concise format of yours which is rather contradictory to what most of ppl on yt are providing

  • @mhalton
    @mhalton 2 years ago +1

    13:52
    Happiest man!

    • @EngineerMan
      @EngineerMan  2 years ago

      Oh god I'm not gonna be able to unhear that any time soon.

  • @clownboy84
    @clownboy84 4 years ago +1

    Thanks for the video. I like how you take the basics and break it down with really good and practical examples.

  • @rustyelectron
    @rustyelectron 5 years ago +1

    This video is really a good intro to web scraping.

  • @johnbecker3116
    @johnbecker3116 6 years ago +11

    I spent forever teaching myself this last week and now you post this. Kill me now

  • @arturmangabeira9990
    @arturmangabeira9990 6 years ago +1

    EM you're awesome. i was studying web scraping and this came up. subscribed yesterday to your channel! lol

  • @ViniciusProvenzano
    @ViniciusProvenzano 3 years ago

    Real nice content! Straight to the point. I played around with Beautiful Soup a few years ago for a small project, and I just wish this video had been around at the time....

  • @Omar-ic3wc
    @Omar-ic3wc 4 years ago +3

    Exactly what I needed thank you very much!!

  • @TomSilver_42
    @TomSilver_42 3 years ago

    Simply brilliantly explained. I have seen few of your videos and I like your style, therefore You have earned another subscriber.

  • @kennethmcquade4341
    @kennethmcquade4341 5 years ago

    You're definitely skilled! For anyone watching these videos, don't get discouraged, this takes time. @Engineer Man , Can you talk about the experience of learning how at the beginning of your videos?

  • @ledosilverknight4619
    @ledosilverknight4619 5 years ago

    Some of the best tutors are always straight-forward: down and dirty!

  • @stephenrochester6309
    @stephenrochester6309 5 years ago +1

    These videos are brilliant. Thanks for all your hard work.

  • @DrSarge37
    @DrSarge37 6 years ago +14

    It would be cool to see how to deal with pagination. So you want data from /page=1, /page=2 etc. Etc.

    • @joefagan9335
      @joefagan9335 4 years ago +4

      In your browser go to the next page and copy the URL of, say, page 2, then go to the last page to find the last page URL. Use that as a template to build the URL of each page you want. Loop over them in turn.

    • @joefagan9335
      @joefagan9335 4 years ago

      John Keymer nope, you're not parsing the page a second time to find the next button. You scrape the current page and then grab the next page by creating the string for the next URL and accessing the next page - just one grab per page.
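
      A minimal sketch of that approach, assuming a hypothetical site whose pages follow a
      page=N URL pattern and the last page number has been read off the site's pager:

      import requests
      from bs4 import BeautifulSoup

      base_url = "https://example.com/leaderboard?page={}"  # hypothetical URL template
      last_page = 5                                         # taken from the site's pager

      for page in range(1, last_page + 1):
          soup = BeautifulSoup(requests.get(base_url.format(page)).text, "html.parser")
          # ...extract rows from this page exactly as for page 1...
          print(page, len(soup.find_all("tr")))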

  • @asdfasdfasdf383
    @asdfasdfasdf383 4 years ago

    You go straight to the point. Obviously, you know a lot more in-depth about this topic. Anyway, I like it.

  • @axelcano1623
    @axelcano1623 6 years ago

    Really nice content! You explain just enough to be clear but not too much that's perfect. Please continue to remind the type of the elements you create, it's very important for beginners.

  • @qettyz
    @qettyz 5 years ago +3

    These were really good examples, thank you!

  • @andriybortnik8310
    @andriybortnik8310 6 years ago +2

    This is an awesome video, I actually enjoy the in depth walk through of what your reasoning behind writing code is, step by step. Versus just saying " I did this" and not really explaining anything. On a separate note , I'm looking to get into python, and I have previous code development experience, but It's been a little while, and setting up an environment to start doing some coding is a bit daunting. I'm looking to do more on the machine learning , neural networks side of things. I don't struggle with any of the logic, mathematics, but I know there are many pros/cons of various IDE's . Some have better support for various packages , etc.. I was wondering if you could either make a video on some of this information, or maybe throw a few pointers my way. I would really appreciate that. Otherwise, keep up the great content!!!

    • @KingEbolt
      @KingEbolt 6 years ago +3

      Let me throw some pointers at you.
      0x3A738216
      0x6B321970
      0x88AC172B

    • @EluviumMC
      @EluviumMC 6 years ago +3

      I've found that I really like using Microsoft's VS Code (not to be confused with Visual Studio). The IDE has a good clean interface, lots of extension support, and a built-in terminal.

    • @andriybortnik8310
      @andriybortnik8310 6 years ago +1

      @@KingEbolt I can't even get mad at that... Well done

    • @camaulay
      @camaulay 5 years ago

      @@EluviumMC +1 VS Code, switched from Sublime

  • @kristiyangerasimov6708
    @kristiyangerasimov6708 3 years ago

    Great video. Stuff like that makes me want to program and develop software until i die.

  • @EluviumMC
    @EluviumMC 6 years ago +6

    Happy that you've chosen this topic. I've been exploring web scraping and have a script that works pretty well on a site that I frequent. Another awesome tool that can be used to also automate web navigation is the selenium package. But on more of a question-related note, I know the script you just made was pretty simple, and the one I have isn't that complicated, but I've been wondering how one would go about writing an object-oriented script for scraping?

    • @UchihaAditya
      @UchihaAditya 6 years ago

      What are the advantages of selenium over Beautiful Soup?? I have a web-scraping assignment now and was advised to use selenium.

    • @EluviumMC
      @EluviumMC 6 years ago +2

      Selenium can be used as a web scraper, but I use it more for web navigation, and then use Beautiful Soup to actually get the data I need from the pages once they've been navigated to. I just find Beautiful Soup to be more intuitive for extracting the data.

    • @yixunnnn
      @yixunnnn 6 years ago

      With selenium it is like an automated user, and when you use it, you require a web driver, and you can choose whether you want the automated browser to run in the background or not. I recently used selenium because I was trying to request content behind a Microsoft login page, which is loaded using javascript, so I needed to wait until the content actually finished loading before I submitted anything. Unlike requests, which instantly retrieves the page content.
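
      A minimal sketch of that split, assuming a hypothetical JavaScript-heavy page and a
      Chrome driver available to Selenium (selenium and beautifulsoup4 installed):

      from bs4 import BeautifulSoup
      from selenium import webdriver
      from selenium.webdriver.common.by import By
      from selenium.webdriver.support.ui import WebDriverWait
      from selenium.webdriver.support import expected_conditions as EC

      driver = webdriver.Chrome()
      driver.get("https://example.com/app")  # hypothetical URL
      # Wait until the JavaScript-rendered element actually exists, unlike requests.get().
      WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "content")))

      # Hand the rendered HTML to Beautiful Soup for the actual extraction.
      soup = BeautifulSoup(driver.page_source, "html.parser")
      print(soup.find(id="content").get_text(strip=True))
      driver.quit()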

  • @legioner304
    @legioner304 6 years ago +2

    3 searches in the loop - very dirty )
    "The speed of software halves every 18 months"

  • @oromis995
    @oromis995 3 years ago

    This content is absolute gold.

  • @DirtySocrates
    @DirtySocrates 6 years ago +2

    Excellent! Thank you!! Great vid!

  • @PriZ0nM1ke
    @PriZ0nM1ke 6 years ago

    Wow these videos are awesome! Direct and concise but understandable!! Well done!

  • @K2ThaYo
    @K2ThaYo 6 years ago +1

    Beautiful video man! Really valuable information here. As a sysadmin with over 10 years of experience, I can state it's a really clean method of scraping. I used to use bash scripts for everything, but using libraries in python is sooo helpful. It would be a pain in the ass in bash with awk, grep, etc. I hope to see more soon

  • @laxlyfters8695
    @laxlyfters8695 6 years ago +8

    Went through a 30 second hillshire farms ad. Great match youtube

    • @EngineerMan
      @EngineerMan  6 years ago +13

      Google knows you're into web scraping and sliced turkey lol.

    • @laxlyfters8695
      @laxlyfters8695 6 years ago +1

      Engineer Man no lie, came back and got an ad for $3 jack box munchie meals. YouTube thinks your fans are stoned while watching your videos

  • @chowfatt38
    @chowfatt38 6 years ago +52

    Great video again. I've been playing with web scraping for a while and I find that most websites nowadays use javascript rendering quite heavily. Will you make a part 2 talking about how to scrape javascript-rendered websites? And what do you think about another web scraping package, Scrapy? Thanks Man

    • @poidog22
      @poidog22 5 years ago +2

      This would be a great follow on. +1

    • @cruzab3153
      @cruzab3153 5 years ago +2

      Selenium is good and easy....

    • @trailrider6844
      @trailrider6844 5 years ago

      +2

    • @tayfun6378
      @tayfun6378 4 years ago +1

      puppeteer does a good job these days I think

    • @Megaloplex
      @Megaloplex 3 years ago

      +100

  • @royslapped4463
    @royslapped4463 2 years ago

    this is perfect for what I needed thank you!

  • @luis96xd
    @luis96xd 4 years ago

    Wow, I liked this video so much! It was very useful! 😄
    You really have helped me a lot, it was well and fully explained, with real life examples
    Thank you so much for this tutorial! 👏👏

  • @chrisabreu7469
    @chrisabreu7469 6 years ago

    your videos are a life saver man. keep up the great content

  • @ddmin3082
    @ddmin3082 6 years ago +10

    Awesome video! Can you do one on the requests module please?

  • @Viruhemanth
    @Viruhemanth 5 years ago

    carefully he's a hero

  • @grantfaith
    @grantfaith 3 years ago

    ty, saved me an hour of time from all these other videos. holy shit

  • @kylemichaelreaves
    @kylemichaelreaves 3 years ago

    Super helpful, thank you.

  • @syntaxis5584
    @syntaxis5584 5 years ago +1

    why did you use 'View page source' instead of 'inspect' to find the page structure?

    • @EngineerMan
      @EngineerMan  5 years ago +2

      I did it because view source represents the content that was delivered to the browser on load whereas inspect represents the content currently on the page. Since the scraper doesn't see anything dynamically generated, view source is best.

  • @supalistmain4882
    @supalistmain4882 5 years ago

    @Engineer Man , what is your day job? And how did you get into coding? Do you have a CS degree? and.... well instead of more questions, rather just ask whats your background (ito what lead to you adding so much value with these vids)?

  • @virtualize2424
    @virtualize2424 3 years ago

    How do you scrape something like YouTube comments (without using the YouTube api)? When I get the html data for a video using the requests library, the video's comments are not there in the html data.

  • @xppaicyber3823
    @xppaicyber3823 4 years ago +1

    Great content

  • @MrFrondoso
    @MrFrondoso 2 years ago

    Brilliant. God knows I've been struggling to use BSoup. And now I feel like I've finally understood it.

  • @daltonkraklan2257
    @daltonkraklan2257 1 year ago

    This was so freaking helpful

  • @JeroenTrappers
    @JeroenTrappers 6 years ago

    Good video. Personally, i like using node with dom module and write css queries to extract what i want.

  • @DevastaingDj
    @DevastaingDj 6 years ago

    Awesome! Kudos! Very helpful. Thanks man!

  • @blevenzon
    @blevenzon 6 years ago +1

    Wow just found your channel by accident and I’m loving it. Awesome content!! Do you think you can do a vid on Elastic Stack?

  • @treybailey6752
    @treybailey6752 6 years ago

    Great vid with fantastic content. Would love to see this where you first login in order to get content. Getting the headers set is a challenge.

    • @EluviumMC
      @EluviumMC 6 years ago

      Using Selenium to do the site navigation to get you logged in is how I worked around getting into a site that requires login credentials prior to scraping.

  • @BrettKromkamp
    @BrettKromkamp 5 years ago

    Excellent tutorial. Thanks.

  • @kingseekerbackup3085
    @kingseekerbackup3085 3 years ago

    I use requests and bs4. Never thought of using regex besides pattern searching

  • @NoorquackerInd
    @NoorquackerInd 3 years ago

    _I can't believe I used to use Selenium for this_
    At least for that project I rewrote it and used raw Requests when I found out my target could return data in JSON
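
    A minimal sketch of that shortcut, assuming the target exposes a hypothetical JSON endpoint:

    import requests

    # No HTML parsing needed when the data is already structured.
    data = requests.get("https://example.com/api/leaderboard.json").json()  # hypothetical URL
    print(data)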

  • @jacoboneill3735
    @jacoboneill3735 2 years ago

    Saw this, instantly thought this would be easy to implement to get amazon prices... bot blockers, who thought captcha would get me 😂

  • @princepeach_
    @princepeach_ 4 years ago

    I have an issue though, I'm web scraping my stats from a website and when my stats update, the scrape doesn't pick up the change.

  • @Project_OMEG4
    @Project_OMEG4 6 years ago

    Great video EM, but requests is not a built-in module for python; (does not come with the default python installation), so you will have to install it. For any missing library, the source is usually available at pypi.python.org/pypi/.
    You can download requests here: pypi.python.org/pypi/requests
    To install:
    • OSX/Linux : Use $ sudo pip install requests if you have pip installed. Alternatively you can also use sudo easy_install -U requests if you have easy_install installed.
    • Windows : From a cmd prompt, use > Path\easy_install.exe requests, where Path is your Python*\Scripts folder, if it was installed (for example: C:\Python32\Scripts\easy_install.exe).
    If you manually want to add a library to a windows machine, you can download the compressed library, unzip it, and then place it into the Lib folder of your python path. (For example: C:\Python32\Lib)
    NOTE: Mac OSX and Windows, after downloading the source zip, un-compress it and from the terminal/cmd run python setup.py install from the uncompressed directory.
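
    For a quick check, a minimal sketch that verifies requests is importable and installs it
    with pip into the current interpreter if it is not (assumes pip is available):

    import importlib.util
    import subprocess
    import sys

    if importlib.util.find_spec("requests") is None:
        subprocess.check_call([sys.executable, "-m", "pip", "install", "requests"])

    import requests
    print(requests.__version__)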

  • @KhalilYasser
    @KhalilYasser 3 years ago

    Amazing. Thanks a lot.

  • @JeanDAVID
    @JeanDAVID 5 years ago

    I have difficulties souping data with some tags in HTML files like . I use soup = BeautifulSoup(myfile, 'html.parser') and all the link tags turned out to be transformed to . How come?

  • @bennieliu3261
    @bennieliu3261 6 years ago

    Awesome tutorial man! Can I suggest scraping dynamic pages as the next tutorial. Would be a sweet follow up

    • @EngineerMan
      @EngineerMan  6 years ago

      Thanks. Part 2 of this is being requested a lot, I need to see what is best to do.

  • @stefandevos1520
    @stefandevos1520 6 years ago

    love your tutorials man

  • @recitoprasidha5761
    @recitoprasidha5761 5 years ago

    But how do we scrape the modern web that uses javascript frameworks, where if we look at "view page source" we don't see the html tags anymore, because it is already wrapped in js?

  • @affezippel7214
    @affezippel7214 2 years ago

    Is there somebody who did extract the data from the golf website, like getting all the names, numbers and emails of the club contacts, but without regex, using beautiful soup instead? I'm stuck there and would appreciate some help. I also wrote my problem in the EM discord channel in python

  • @FreeDomSy-nk9ue
    @FreeDomSy-nk9ue 3 years ago

    How do I combine this with login? For example, I want to log into my YouTube account and scrape data from my favorite videos url.

  • @ilobuhabib8325
    @ilobuhabib8325 1 year ago

    love your tutorials.
    I tried following your method to scrape a site, but the output is empty. when I checked the 'tr' throughout the source code, it has values, but I do not understand why the output is empty.

  • @DrChrisCopeland
    @DrChrisCopeland 5 years ago

    how would you modify this for nested div elements in place of table row and cell elements?
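
    A minimal sketch of the same idea with divs, assuming hypothetical "row" and "cell"
    class names on the site being scraped:

    import requests
    from bs4 import BeautifulSoup

    soup = BeautifulSoup(requests.get("https://example.com/stats").text, "html.parser")  # hypothetical URL

    # find_all on class names plays the role that tr/td played for a table.
    for row in soup.find_all("div", class_="row"):
        cells = row.find_all("div", class_="cell")
        print([cell.get_text(strip=True) for cell in cells])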

  • @gabrielh5105
    @gabrielh5105 4 years ago

    Why can't I find specific content on pages like whatsapp? I would like to fetch the name of a person, and I did as you said, by checking the source and getting the div class, but it simply doesn't appear in the soup

  • @ChrisAthanas
    @ChrisAthanas 3 years ago

    Thank you for a very clear

  • @zigabrus
    @zigabrus 3 years ago

    Top explanation, tnx!

  • @0xBerto
    @0xBerto 4 years ago

    Hey, question. Your python craigslist scammed video. Why turn off comments? I had questions. Now I'm in another video to ask about it. Basically just wanted to know if the person whose DB you tried to flood would be able to just delete all posts made from your IP address?

  • @ne12bot94
    @ne12bot94 5 years ago

    Just wondering is there way to filter it and remove all the garbage that they send back? Idk?v😐v

  • @bed781
    @bed781 3 years ago

    Is there a scraping method that can read the javascript content generated?

  • @molimola3
    @molimola3 5 years ago +2

    Hey I love your videos! You explain everything so well. I am trying to scrape some websites but they don't allow me because of their bot protection... Do you have any tips about this? Thanks

    • @user-bf7iz2tz1i
      @user-bf7iz2tz1i 5 years ago +2

      I got this problem too. In my case I solved it by changing the type of my request so that it now includes *headers*.
      You need to look up the data for your headers in your web browser. Visit the google.com page, press ctrl+shift+I, find "Network" in the console that opens, and search for the necessary elements there.
      In other words, the solution is adding "headers". I hope the information helps you.
      Example:
      import requests

      headers = {"accept": "your accept symbols",
                 "User-Agent": "your user agent string"}
      session = requests.Session()
      request_variable = session.get(url, headers=headers)  # url defined elsewhere
      P.S. I am from Russia, I was not using a translator while typing this, I hope you were able to understand me.

  • @siloenoah
    @siloenoah 6 years ago +4

    Teach me your ways

  • @TheEndermanMob
    @TheEndermanMob 3 years ago

    How does he know a lib for everything? I'm addicted to his videos.

  • @santiagorivera1562
    @santiagorivera1562 4 years ago

    What is the advantage to using Beautiful Soup over other webscraper packages with Python?

  • @NokiaN8Guides
    @NokiaN8Guides 5 years ago +3

    thank you so much for this amazing tutorial, i would like to ask what do we do if the site i want to scrape requires being logged in. btw this got recap

    • @joefagan9335
      @joefagan9335 4 years ago

      Usually, you can login first. Leave it open in your browser and scrape away.
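
      When the scraper itself has to log in, a minimal sketch using a requests session, assuming
      a hypothetical form login (endpoint and field names are made up; real sites often also need
      CSRF tokens or cookies carried over from a browser session):

      import requests
      from bs4 import BeautifulSoup

      session = requests.Session()
      session.post("https://example.com/login",  # hypothetical login URL
                   data={"username": "me", "password": "secret"})

      # The session keeps the login cookies for later requests.
      soup = BeautifulSoup(session.get("https://example.com/members").text, "html.parser")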

  • @socksincrocks4421
    @socksincrocks4421 4 years ago

    Thank you for your video. Awesomesauce

  • @sgttye
    @sgttye 6 years ago

    Keep up the good work man!

  • @jarodmorris611
    @jarodmorris611 5 years ago

    Anyone know of any tutorials on how to escape parsed data for inclusion in a MySQL table? Been getting errors that I'm sure have to do with unicode to UTF-8 conversion, but I have had no luck in finding anything to show how to escape / encode text so it doesn't throw an error when inserting into MySQL.
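
    One way to sidestep manual escaping is to let the driver do it with a parameterized query;
    a minimal sketch, assuming mysql-connector-python and a hypothetical players table:

    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="user", password="pass",
                                   database="scraped", charset="utf8mb4")
    cur = conn.cursor()

    # Placeholders encode/escape the scraped values; never build the SQL string yourself.
    cur.execute("INSERT INTO players (name, score) VALUES (%s, %s)", ("Zoë", 42))
    conn.commit()
    conn.close()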

  • @luis96xd
    @luis96xd 6 years ago

    This is excellent! Well explained! :D

  • @johanneszwilling
    @johanneszwilling 5 years ago

    😎👍🏼 Thank you, Sir!

  • @staynjohnson4221
    @staynjohnson4221 4 years ago +1

    The website ( umggaming.com/leaderboards ) now has Cloudflare, causing requests.get() to give a 503 status_code. Any solution to this?

  • @metaphysicalconifercone182
    @metaphysicalconifercone182 5 years ago

    Do you know any website or application that gives this sort of challenges? I have mostly learned python but in practice still don't know what to do with it yet.

  • @LarsHolmVV46
    @LarsHolmVV46 4 years ago

    That was beautiful not to say absolutely excellent. Man ,,,,,

  • @ozoikeobinna8116
    @ozoikeobinna8116 5 years ago

    I was looking for a software to scrape emails from website and ended up here. I don't even know python and which software you were using. Where do i start now ?

  • @werecow68
    @werecow68 5 years ago

    Wondering if you or someone else know why with Python 3.7 installed on Windows and using Visual Studio I get an error that the requests module is not installed? Obvious why I guess but where or how do I get the requests module? TIA

    • @werecow68
      @werecow68 5 years ago

      Replying so others see if they have the same issue. In VS next to the Python 3.7(64-bit) click the icon to the right that is a package and you can install packages from there. :)