Find and Find_All | Web Scraping in Python

  • Published: 21 Nov 2024

Comments • 35

  • @babyniq08
    @babyniq08 a year ago +50

    I used to binge-watch Netflix; now I'm binge-watching all your videos. Thank you, Alex, for all your amazing videos!

    • @AlexTheAnalyst
      @AlexTheAnalyst  a year ago +6

      Glad you like them!

    • @thepinner61
      @thepinner61 11 months ago +1

      @@AlexTheAnalyst Thank you so much! Made my day

    • @ParalegalEagle-d7q
      @ParalegalEagle-d7q 2 months ago

      #Me2 kinda sorta ….🎉🎉🎉🎉😂😂😂😂😂

    • @hayatism1496
      @hayatism1496 17 days ago

      that is so funny 🤣🤣. Thank you for this laugh...

  • @MichaelDavid-y3y
    @MichaelDavid-y3y 2 months ago +1

    I remember watching this a few years ago when I was starting my journey. It is still the best tutorial I have watched since, and I am currently a senior engineer.

  • @VennisaOwusu-Barfi
    @VennisaOwusu-Barfi 11 months ago +2

    I am pretty new to data analysis, and I was working on a project where I needed to scrape data from a website, so this tutorial has been so helpful! I spent hours trying to figure it out, and the other tutorials on YouTube don't explain anything or skip steps, so it's hard to learn and personalize it for your own project.
    This, however, was detailed and straight to the point! Thank you so much. You're a lifesaver!

  • @shahrukhahmad4127
    @shahrukhahmad4127 a year ago +6

    I tried learning web scraping at least 5 times and failed every time, but you made everything simple and handy. Please, please, it's a request from my side to resume this playlist and teach basic to advanced scraping using Python. I can't learn without you. Thank you in advance, and I'm waiting for more of your videos in the same playlist, Alex.

  • @stephenwagude9330
    @stephenwagude9330 a month ago

    I am watching the entire series, and I must say I am really enjoying Python. It has a lot of uses in my day-to-day work, even as I am thinking of transitioning to data analytics.

  • @ENTJ616
    @ENTJ616 a year ago +2

    Mate, you are out of this world.

  • @franciscoflor6125
    @franciscoflor6125 a year ago +4

    You are the best, your videos have really helped me a lot.
    But this series of web scraping videos has been like you were reading my mind. I was thinking of doing a project on my own, but the only way to get the database is through web scraping.
    Waiting for the next video. One of the questions I have is how to proceed if I want to extract information about the hockey teams from pages 2, 3, etc.
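
    A rough sketch of one way to do that: loop over a page-number query parameter and collect rows from each page. The URL, the page_num parameter, and the "team" row class below are assumptions about how the hockey-teams site paginates, not something confirmed in the video.

    import requests
    from bs4 import BeautifulSoup

    base_url = "https://www.scrapethissite.com/pages/forms/"   # assumed hockey-teams page

    all_rows = []
    for page_num in range(1, 4):   # pages 1, 2, 3
        # the ?page_num= query parameter is an assumption about the site's pagination
        page = requests.get(base_url, params={"page_num": page_num})
        soup = BeautifulSoup(page.text, "html.parser")
        all_rows.extend(soup.find_all("tr", class_="team"))   # the "team" row class is also an assumption

    print(len(all_rows))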

  • @nnamdiLdavid
    @nnamdiLdavid a year ago

    Thanks for all you do, Alex. Could you be so kind as to continue this series, especially for advanced scraping, like scraping from unstructured data, etc.?

  • @katcirce
    @katcirce 3 months ago

    Thank you for this! Awesome starting point for my NLP project!

  • @chu1452
    @chu1452 a year ago

    As an Informatics Engineering graduate, this is easier for me to understand since we learned HTML back then.

  • @meryemOuyouss2002
    @meryemOuyouss2002 a year ago

    Thank you, I also finished this playlist.

  • @ErenKıraç-g5m
    @ErenKıraç-g5m 4 months ago +1

    You don't need to use the find function to get text; just try soup.find_all(arguments...)[x].text.strip(). You can write 0, 1, 2, 3... for x depending on which piece of data you want. For example, at 10:15, for x=1 the text should be "Year", because 1 is the second index in Python, after the first index, 0.
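
    A minimal sketch of that indexing approach, assuming the hockey-teams page and its <th> header cells from the video (the URL and tag are assumptions):

    import requests
    from bs4 import BeautifulSoup

    url = "https://www.scrapethissite.com/pages/forms/"   # assumed page from the video
    soup = BeautifulSoup(requests.get(url).text, "html.parser")

    # find_all() returns a list, so indexing it replaces a separate find() call
    headers = soup.find_all("th")
    print(headers[0].text.strip())   # first header cell
    print(headers[1].text.strip())   # second header cell -- "Year" at 10:15 in the video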

  • @kajal648
    @kajal648 10 months ago

    Thank you so much, sir. I was caught up in a problem, but I was able to solve it after watching this video.

  • @mxdigitalmediamarketplace
    @mxdigitalmediamarketplace 9 months ago

    Hello, thank you for your tutorial, great info. What editor do you use?

  • @jmc1849
    @jmc1849 8 months ago

    Hi Alex (as if!)
    Thanks for all the content

  • @kaliportis
    @kaliportis a year ago +1

    Hello, I commented on one of your previous videos enquiring about the offer you made in one of your "How to Build a Resume" videos concerning resume reviews. I completely understand if that is no longer the case, considering that video was 3 years ago, but if you are still reviewing resumes I would like to send mine to you. Have a nice day, and congratulations on hitting 500k.

  • @LavanyaGopal-py6jd
    @LavanyaGopal-py6jd 7 months ago

    Hello, thank you so much for this wonderful tutorial. However, I have one doubt that needs clarifying. I tried this out with the same code and URL you used, but there seems to be a problem in this line -> print(Soup.find_all('p',class_="lead")). The output for this line shows [ ], which isn't the paragraph from the website. How do I rectify this problem? Also, I use IDLE for Python. Once again, your videos are awesome, and I hope you continue making more great coding content.
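
    An empty list from find_all usually means the tag/class pair isn't in the HTML that was actually downloaded (a typo in the class name, a different URL, or a blocked request). A small debugging sketch, with the URL assumed from the video:

    import requests
    from bs4 import BeautifulSoup

    url = "https://www.scrapethissite.com/pages/"   # assumed URL from the video
    page = requests.get(url)
    soup = BeautifulSoup(page.text, "html.parser")

    print(page.status_code)                    # confirm the request succeeded (200)
    print(soup.find_all("p", class_="lead"))   # [] means no <p class="lead"> in what came back
    print(soup.prettify()[:500])               # inspect the HTML that was actually downloaded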

  • @ArisingProgram
    @ArisingProgram 8 months ago +1

    Hey Alex,
    I'm trying to grab text that is randomly generated by the Random Word Generator website for my hangman project. The problem is that the text I grab isn't in the HTML; it's always displayed as "loading...". What new techniques can you teach us for grabbing this data? Thanks!
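
    That "loading..." placeholder usually means the words are filled in by JavaScript after the page loads, so requests never sees them. One common workaround, not covered in this video, is a browser-driven tool such as Selenium; the URL and selector below are placeholders, not the site's real markup:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()                               # drives a real browser, so JavaScript runs
    driver.get("https://randomwordgenerator.com/")            # assumed URL for the generator
    words = driver.find_elements(By.CSS_SELECTOR, ".word")    # placeholder selector -- inspect the page for the real one
    print([w.text for w in words])
    driver.quit()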

  • @Kaura_Victor
    @Kaura_Victor 8 months ago

    Thanks, Alex!

  • @tristanmoller9498
    @tristanmoller9498 9 days ago

    Thanks!

  • @monsieurm2904
    @monsieurm2904 11 months ago

    Where can we find the same notebook you use throughout the video? :)

  • @geoffreycg5650
    @geoffreycg5650 9 months ago

    Is there a next video in the series?

  • @ShivaSunkaranam-qx3jf
    @ShivaSunkaranam-qx3jf 8 months ago +1

    If I type soup.Find('div'), nothing displays. But that's available in the script.
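
    One likely cause, sketched below: Python names are case-sensitive, and the BeautifulSoup method is lowercase find, so a capitalised Find won't behave like it.

    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<div><p>hello</p></div>", "html.parser")

    print(soup.find("div"))   # lowercase find() is the BeautifulSoup method
    # soup.Find("div") is not that method -- Python is case-sensitive,
    # so the capitalised spelling will not return the <div> tag.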

  • @mohammed-hananothman5558
    @mohammed-hananothman5558 2 months ago

    .find_all(...).text does not show the '\n' on my PC, even though you could see the escape character at work.
    Is there a setting I could use to show these characters so I can clean the text easily?
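
    One way to make those characters visible, sketched with a stand-in snippet of HTML: print() renders '\n' as a line break, while repr() shows it literally.

    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<p>\nHockey Teams\n</p>", "html.parser")   # stand-in for the page from the video
    text = soup.find_all("p")[0].text

    print(text)         # the '\n' is rendered as a line break, so it looks invisible
    print(repr(text))   # repr() shows escape characters literally: '\nHockey Teams\n'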

  • @Syrviuss
    @Syrviuss a year ago

    Does it work only with static pages, not sites like Amazon or other shops? There were some problems in the past tutorial when we tried Amazon Web Scraping Using Python. How can we tell the difference? Thanks for all your videos ;)
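
    A rough way to tell the difference, sketched below (the URL and search text are placeholders): download the raw HTML with requests and check whether the data you want is already in it. If it only appears after JavaScript runs in a browser, as on many shop pages, requests and BeautifulSoup alone won't see it.

    import requests

    url = "https://example.com/product-page"   # placeholder URL
    html = requests.get(url).text

    # If the value you want (a price, a team name, ...) appears in the raw HTML,
    # the page is effectively static for scraping purposes.
    print("text you expect on the page" in html)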

  • @DeltaXML_Ltd
    @DeltaXML_Ltd a year ago

    Interesting video, keep it up!

  • @amirsec
    @amirsec 20 days ago

    html.parser is correct bro

  • @rockcaesarpaper291
    @rockcaesarpaper291 a year ago

  • @elphasluyuku4167
    @elphasluyuku4167 a year ago

    Hey guys, I am getting an 'SSLCertVerificationError'. Can anyone kindly help me resolve this?

    • @vahidmehdizade5781
      @vahidmehdizade5781 a year ago +2

      It typically occurs when the SSL certificate of the remote server cannot be verified during an HTTPS connection. You can work around it with these lines of code:
      import requests
      requests.packages.urllib3.disable_warnings()   # silence the InsecureRequestWarning this produces
      page = requests.get(url, verify=False)         # verify=False skips certificate verification, so only use it on pages you trust