Python Selenium for Beginners - A Complete Web Scraping Project (Scraping Dynamic Websites)

  • Published: Dec 13, 2024

Comments • 121

  • @ThePyCoach
    @ThePyCoach  2 years ago +7

    🚨 Make sure you install the same version of Selenium I use in the video: pip install selenium==3.141.0 to avoid any deprecation messages || 🔥Join my 8-hour Web Scraping course: www.udemy.com/course/web-scraping-course-in-python-bs4-selenium-and-scrapy/?referralCode=291C4D7FF6F683531933
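
    For anyone following along, a minimal sketch of the Selenium 3 style setup this pinned comment assumes; the chromedriver path and the URL are placeholders, not the exact values from the video:

    from selenium import webdriver

    CHROMEDRIVER_PATH = r"C:\path\to\chromedriver.exe"            # placeholder: wherever you saved the driver binary
    driver = webdriver.Chrome(executable_path=CHROMEDRIVER_PATH)  # Selenium 3.141.0 still accepts executable_path
    driver.get("https://example.com")                             # placeholder: the football results page from the video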

    • @AliWaris-s8n
      @AliWaris-s8n 10 months ago +1

      I am not a rich man, I am an intermediate school student, so kindly recommend me a free web scraping course and I shall be grateful to you. And thanks a lot for sending me this link. May ALLAH give you a long and happy life.

  • @emammadov
    @emammadov 3 years ago +21

    This is the best selenium tutorial video I've come across so far. Kudos to you, mate!

  • @yunusemresurmeli9980
    @yunusemresurmeli9980 2 years ago +2

    This is the best Selenium tutorial on YouTube. Thank you for the tutorials.

  • @afiraaslam7020
    @afiraaslam7020 2 years ago +1

    This made web scraping with Selenium so easy for me. Thank you!

  • @suna2260
    @suna2260 2 years ago +1

    Don't give up mate, that was my first day using the software and I will work on it for a long time!

  • @husainbudi
    @husainbudi 1 year ago

    I've just watched this video. Moreover, it's really powerful and helped me understand the topic. Thanks a bunch.

  • @mauriciovalercampos6916
    @mauriciovalercampos6916 2 years ago

    Now I understand how to make loops and export everything. Really, God bless you... your way of explaining is simply awesome, I loved it.

  • @Libanes13sbo
    @Libanes13sbo 4 months ago

    Exceptional tutorial! Congratulations!

  • @profeme
    @profeme 3 years ago +6

    Very nice video and a great introduction to Selenium and web scraping with Python in general 🙂

  • @MukeshYadav-wb5uo
    @MukeshYadav-wb5uo 1 year ago

    Your lecture is very helpful. It is awesome, love you from India.

  • @investinnnnn
    @investinnnnn 11 months ago

    you are a legend mate, thank you

  • @MrWoodstock35
    @MrWoodstock35 3 years ago

    Great beginner video man. Appreciate it!

  • @herbilioan5273
    @herbilioan5273 2 years ago

    Thank you for your help, you saved my life.

  • @ckks0nyoutube
    @ckks0nyoutube 1 year ago

    Thanks for this, it helped me understand the .ui and Select imports.
    Still need to understand .common and .exceptions.

  • @FarizAbdan
    @FarizAbdan 2 years ago

    Thanks!!! Great tutorial!
    With some code modifications I can completely scrape a dynamic website.

  • @SebastiánPosada-e2l
    @SebastiánPosada-e2l 10 months ago

    Great tutorial, thanks man

  • @frederikbrokbrandi917
    @frederikbrokbrandi917 1 year ago

    Great video! You are a great teacher

  • @juanchecaro1856
    @juanchecaro1856 2 years ago

    YOU ARE A GOD... NO QUESTION ABOUT IT

  • @harshinibhat3454
    @harshinibhat3454 2 years ago

    Thanks a lot, Frank. This video helped lots.

  • @muhammaddenaadryan2411
    @muhammaddenaadryan2411 2 years ago

    Very good explanation, thank you!

  • @thandokuhlebrianmsane7043
    @thandokuhlebrianmsane7043 7 months ago

    perfect lesson bro!

  • @ram_qr
    @ram_qr 1 year ago

    BRILLIANT!!!

  • @buraks.4463
    @buraks.4463 3 years ago +5

    This is cool. I've just read your Medium article. Keep up the good work!

  • @jacksonfrederick514
    @jacksonfrederick514 2 years ago

    Thanks a lot, Sir. You are helping us.

  • @nadineuwurukundo6511
    @nadineuwurukundo6511 1 year ago +1

    Thank you PyCoach. This video is very helpful. Can you please do us a video on how to scrape reviews from trip websites that are embedded in multiple pages?

  • @mishkathossain2984
    @mishkathossain2984 11 months ago +1

    I think modern Selenium works this way:
    all_matches_button = driver.find_element('xpath', '//label[@analytics-event="All matches"]')
    all_matches_button.click()
    rows = driver.find_elements('xpath', '//tr')
    for row in rows:
        print(row.text)

    • @Cheg-h8d
      @Cheg-h8d 3 months ago

      Thank you! Where do I find out more, please?

  • @SEVENCRYPTOCHANNEL
    @SEVENCRYPTOCHANNEL 1 year ago

    You are super good, thank you sir ☺️

  • @sumudukanchana9089
    @sumudukanchana9089 2 years ago

    Great video. Very, very valuable.

  • @appendix-q6k
    @appendix-q6k 10 months ago +1

    What if my version of Chrome is 121?

  • @shazz007
    @shazz007 2 years ago

    One of the best videos with amazing teaching quality, bro, you are far better than many of the professors out there.
    In this video, instead of Spain, if I want to select and scrape information about all the nations and their leagues, is there any simple way?

  • @Tourismmj
    @Tourismmj 2 years ago

    Thank you so much for sharing!!!!

  • @0103442
    @0103442 3 years ago +1

    It would be interesting to look at a web scraping project using the Scrapy framework.

    • @ThePyCoach
      @ThePyCoach  3 years ago

      There’s one coming soon. Stay tuned!

  • @alisher9442
    @alisher9442 3 years ago +2

    Good my brother

  • @OyeroIbrahim
    @OyeroIbrahim 9 months ago

    Thank you so much boss 😊

  • @benloder9316
    @benloder9316 3 years ago +2

    Hi Frank - great video for me as a newbie.
    I managed to get the csv file to create and save to my projects folder but some of the scores are represented as dates in the file, e.g. 02-Mar, 02-Jan. The data in the text editor is fine though.
    Is this just an issue with Excel saving the score column export as a mixture of 'General' & 'Custom'?
    Is there any solution to this?
    Thanks. [I'm using python 3.10 and Atom editor on Windows].

    • @ThePyCoach
      @ThePyCoach  3 years ago +2

      Oh yeah I remember about that. It’s just an Excel issue. You won’t have that problem when reading the file with Pandas.
      If you want to stick with Excel, you need to change the cell types so the scores aren’t seen as dates.
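
      A rough illustration of the reply above: reading the exported CSV with Pandas keeps the scores as plain text; the file name here is a placeholder for whatever you called the export.

      import pandas as pd

      # Forcing string dtype guarantees scores like "2-3" are never
      # reinterpreted as dates the way Excel sometimes does.
      df = pd.read_csv("football_data.csv", dtype=str)
      print(df.head())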

  • @aadii0911
    @aadii0911 1 year ago

    Hey @ThePyCoach, it's a really nice video. Can you share the cheat sheet too, as the website in the description isn't working?

  • @phamhung4042
    @phamhung4042 2 years ago +1

    Thank you.

  • @osamahugoal-hasan6576
    @osamahugoal-hasan6576 2 years ago +2

    How do I scrape from a list of URLs in .csv file instead of 1 URL only?

    • @ThePyCoach
      @ThePyCoach  2 years ago +1

      If you're referring to the other video where I scrape with Pandas, then it's simple. You only have to add a for loop and find a pattern within the URLs.
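
      A rough sketch of the loop the reply describes, assuming a urls.csv file with a column named "url" (both names are placeholders) and the same row-extraction logic as in the video:

      import pandas as pd
      from selenium import webdriver
      from selenium.webdriver.common.by import By

      urls = pd.read_csv("urls.csv")["url"]    # placeholder file and column name
      driver = webdriver.Chrome()

      all_rows = []
      for url in urls:
          driver.get(url)                      # visit each page in turn
          for row in driver.find_elements(By.XPATH, "//tr"):
              all_rows.append(row.text)        # reuse the same extraction logic for every page

      driver.quit()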

  • @sumpeter5439
    @sumpeter5439 2 years ago

    Hi Frank, I have a quick question. Is it possible to write a complicated Python script that can interact with different kinds of websites for automation, or do we have to write a different, customized script for every website? Thanks.

  • @slickbaiter2010
    @slickbaiter2010 1 year ago

    If you are using webdriver-manager along with Selenium, then how would you append data?

  • @rje4242
    @rje4242 1 year ago +2

    For some reason the latest version of Selenium chose to get rid of find_element_by_xpath and the other methods and use a generic find_element with
    "id" | "xpath" | "link text" | "partial link text" | "name" | "tag name" | "class name" | "css selector" passed in as the first argument.

  • @d-rey1758
    @d-rey1758 1 year ago

    Any idea how to click on "a href" elements? WebDriverWait is being used, the element is present, but it only works sometimes.

  • @classicmedia001
    @classicmedia001 6 months ago

    How did you get that "path"??

  • @hipockt4
    @hipockt4 2 years ago +1

    Frank, the website you are using now has season 22/23 that is blank. You are showing 2021 season that is populated. I am assuming that is why your code works and mine does not. Is there a work-around for this?

  • @Nirrrr
    @Nirrrr 2 years ago +1

    After those first 5 lines, I keep getting errors saying executable_path has been deprecated. It does open the website but keeps giving me that error in red. Not sure what to do.

    • @ThePyCoach
      @ThePyCoach  2 years ago +1

      I think it has to do with chromedriver. Download it again and leave it in the path you indicate.
      You can also try updating Selenium with pip.
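
      If you update to Selenium 4, the warning comes from passing executable_path directly; a minimal sketch of the Selenium 4 alternative using the Service wrapper (the path is a placeholder):

      from selenium import webdriver
      from selenium.webdriver.chrome.service import Service

      service = Service(r"C:\path\to\chromedriver.exe")   # placeholder path to your chromedriver
      driver = webdriver.Chrome(service=service)          # no deprecation warning in Selenium 4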

  • @akhilyadav4245
    @akhilyadav4245 2 years ago

    Back when I used to use soft soft when I knew it kind of well I used the soft roll to make softs I thought it was just more effective

  • @nafayunnoor1180
    @nafayunnoor1180 1 year ago

    Hey man. Loved your video. I have a request though: can you make something that can scrape any type of business at the location I provide on Google Maps?

  • @0103442
    @0103442 3 years ago +1

    thanks!

  • @akshykumar1118
    @akshykumar1118 2 years ago +1

    Hello sir, I followed your lecture and I am stuck at one part of the execution, where the for loop displays the error 'WebElement' is not iterable. Could you please help with this?

    • @ThePyCoach
      @ThePyCoach  2 years ago

      The code is working fine last time I checked. You can add a wait -> time.sleep(5) to let the website load correctly in case you get empty data.

    • @codechimps
      @codechimps 1 year ago

      Yeah, same here too. There's a few others in the comments getting this error as well.

    • @codechimps
      @codechimps 1 year ago

      Update: I forgot to add ".text" at the end of find_element. This was giving me errors -> find_element(By.XPATH, 'xpath string'); what I needed, which fixed the errors, was find_element(By.XPATH, 'xpath string').text. Rookie mistake, lol. Great video!

  • @asmaesami1634
    @asmaesami1634 1 day ago

    Hi, thanks for sharing, but the Python for Data Science Cheat Sheet link is not working.

  • @MackieSheets
    @MackieSheets 11 months ago

    Is there a coupon code for your Udemy Web Scraping course?

  • @akshathbharathi7376
    @akshathbharathi7376 1 year ago

    Hello, I have a query.
    How do I write one line of code for clicking multiple web elements?
    For example, is there a way to write something like driver.find_element(By.XPATH, element_1).click().driver.find_element(By.XPATH, element_2).click().driver.find_element(By.XPATH, element_3).click() in ONE LINE?
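
    There is no built-in chaining API like that as far as I know; one compact alternative, sketched here with element_1, element_2, and element_3 as hypothetical XPath strings, is a single-line loop:

    from selenium.webdriver.common.by import By

    # element_1, element_2, element_3 are hypothetical XPath strings defined elsewhere;
    # the loop clicks each matched element in turn, all on one line.
    for xp in (element_1, element_2, element_3): driver.find_element(By.XPATH, xp).click()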

  • @WildLionGames
    @WildLionGames 2 years ago

    I need help please. I can't even get past the driver path section at the beginning; I keep receiving an error message: SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 0-1: truncated \UXXXXXXXX escape, but I'm copying the path from where I downloaded the chromedriver.
    Would it be because I have Chrome 64-bit instead of 32-bit? If so, how do I change that?

    • @jamesbooth3726
      @jamesbooth3726 1 year ago

      Just put the file in the same folder as your Python scripts, much easier.
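
      The unicodeescape error above usually comes from backslashes in a Windows path being read as escape sequences; a quick sketch of two common fixes in the Selenium 3 style used in the video (the paths are placeholders):

      from selenium import webdriver

      # Option 1: a raw string, so "\U" is not treated as an escape sequence
      driver = webdriver.Chrome(r"C:\Users\me\Downloads\chromedriver.exe")

      # Option 2: forward slashes also work on Windows
      # driver = webdriver.Chrome("C:/Users/me/Downloads/chromedriver.exe")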

  • @yj2902
    @yj2902 2 years ago +1

    Is there a way to scrape a dynamic website without specifying the div or element? I'm coding it to scrape any dynamic website, so each site has different div/element names. How would I scrape any dynamic website?

    • @ThePyCoach
      @ThePyCoach  2 years ago +1

      I didn’t quite understand you, but the best way to find an element in dynamic websites is building an XPath. There are functions that you can use inside an XPath to overcome the typical challenges of dynamic websites

    • @yj2902
      @yj2902 2 years ago

      @@ThePyCoach So I want to use Selenium and Beautiful Soup to scrape dynamic websites, but instead of scraping a specific site by specifying its elements and divs, I want to be able to scrape any dynamic website. How would I scrape more than one site at once, since they have different divs and element IDs? I want to use Beautiful Soup and Selenium to get all the text on more than one dynamic site at once, not just focusing on one site.
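
      For reference, the XPath functions ThePyCoach mentions above look roughly like this; the attribute values and text are hypothetical examples, not taken from any particular site:

      from selenium.webdriver.common.by import By

      # Match on partial attribute values or visible text instead of exact ids/classes,
      # which helps when every site names its divs and elements differently.
      links = driver.find_elements(By.XPATH, "//a[contains(@class, 'result')]")       # hypothetical class fragment
      rows = driver.find_elements(By.XPATH, "//*[starts-with(@id, 'match-')]")        # hypothetical id prefix
      button = driver.find_element(By.XPATH, "//button[contains(text(), 'Accept')]")  # e.g. a cookie banner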

  • @xilllllix
    @xilllllix 3 years ago +1

    Why do people sometimes use BeautifulSoup with Selenium? Is there anything bs4 can do that Selenium can't?

    • @ThePyCoach
      @ThePyCoach  3 years ago +1

      I can't speak for everyone, but I used to use both BeautifulSoup and Selenium because it was easier for me to remember the syntax of BeautifulSoup when writing code, and I felt that Selenium wasted more resources when scraping content, so I only loaded the JavaScript-driven website with Selenium and let BeautifulSoup scrape all the data.
      That said, now I only use Selenium because there isn't anything BeautifulSoup can do that Selenium can't.
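
      The combination described above looks roughly like this, assuming bs4 is installed; the URL is a placeholder:

      from bs4 import BeautifulSoup
      from selenium import webdriver

      driver = webdriver.Chrome()
      driver.get("https://example.com")  # placeholder: a JavaScript-driven page
      # Hand the fully rendered HTML to BeautifulSoup and let it do the parsing
      soup = BeautifulSoup(driver.page_source, "html.parser")
      rows = [tr.get_text() for tr in soup.find_all("tr")]
      driver.quit()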

  • @dracula_live24
    @dracula_live24 2 years ago

    Much love

  • @kfsman-xyz
    @kfsman-xyz 10 months ago

    Hello, I don't have the attribute find_element_by_xpath, I only have find_element and find_elements.

  • @amriayoub7394
    @amriayoub7394 2 years ago

    Why doesn't the GMS thing work the same way it did in the video?

  • @alvarocp9246
    @alvarocp9246 1 year ago

    I really liked your video, but I have an issue: when I try to run the program it says no such element: Unable to locate element: {"method":"xpath","selector":".//td[3]"} and I don't know why. Hope you can help me.

  • @andyyy9521
    @andyyy9521 5 months ago

    I have an issue where my chromedriver does not automatically go to the web URL.
    How can I make it so that when I run my Python code, Chrome automatically goes to the URL?

    • @japhethmutuku8508
      @japhethmutuku8508 4 months ago

      do you still have this issue?

    • @andyyy9521
      @andyyy9521 4 months ago

      @@japhethmutuku8508 yes

    • @andyyy9521
      @andyyy9521 4 months ago

      @@japhethmutuku8508 No. Instead of downloading the chromedriver, I had downloaded Chrome. So it was just me downloading the wrong file.

  • @investinnnnn
    @investinnnnn 11 months ago

    Instead of putting the table in a pandas DataFrame, why don't we just print(td1, td2, td3, td4) and create a list? Is it not easier? What's the point, please?

  • @vinshuthakur1892
    @vinshuthakur1892 2 years ago

    Does this work in VS Code?

  • @habagoodtime
    @habagoodtime 2 years ago

    does anyone know where i can get the pirated version of soft soft

  • @quiensoy9036
    @quiensoy9036 2 years ago

    Damn! You have a very cool voice.

  • @facelessbino
    @facelessbino 3 years ago +1

    Is it compatible with Python 3.10.0?

    • @ThePyCoach
      @ThePyCoach  3 years ago

      I’m not quite sure because I was running Python 3.8 when I made the videos and I haven’t updated so far.
      Please let me know if everything’s working fine with 3.10

    • @carltheyoda2155
      @carltheyoda2155 2 years ago

      I'm running 3.10.5 and it works just fine. Selenium is a bit slow, so give it 2 minutes after scraping and it will create the .csv file and show you the data frame. GREAT tutorial! Very clear and easy to follow.

  • @shehabtarek737
    @shehabtarek737 2 years ago

    How do I get the data from a Python script and paste it into the search bar using Selenium?

    • @ThePyCoach
      @ThePyCoach  2 years ago

      I don't quite understand your question. If you mean typing text into the search bar, you can do so by using the .send_keys method.
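
      A tiny sketch of .send_keys in practice; the search box's name attribute here is hypothetical:

      from selenium.webdriver.common.by import By
      from selenium.webdriver.common.keys import Keys

      search_box = driver.find_element(By.NAME, "q")  # hypothetical name attribute of the search bar
      search_box.send_keys("web scraping")            # type the text produced by your script
      search_box.send_keys(Keys.ENTER)                # submit the search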

  • @aestheticday8326
    @aestheticday8326 2 years ago +1

    Hey, I didn't receive any mail from your side when I put in my name and email and submitted it for the Python cheat sheet.

    • @ThePyCoach
      @ThePyCoach  2 years ago

      That's odd. Maybe it went to the spam folder. Anyway I have 2 forms. If one doesn't work, try the other
      Form 1: frankandrade.ck.page/d3b1761715
      Form 2: frankandrade.ck.page/bd063ff2d3

    • @aestheticday8326
      @aestheticday8326 2 years ago

      @@ThePyCoach Yes the first one worked for me now, thank you so much!

  • @TheAlexander775
    @TheAlexander775 2 years ago

    Can't get past the first part, keep getting "handshake failed; returned -1, SSL error code 1, net_error -100" when trying to find elements

  • @Mister_E_or_Mystery
    @Mister_E_or_Mystery 2 years ago

    Unfortunately, all those find_elements do not show up for me...perhaps it's been deprecated?

    • @ThePyCoach
      @ThePyCoach  2 years ago

      Depends on which version of Selenium you have. If you have Selenium 3 the methods I use in the video should work, but if you have Selenium 4, you should use another method: .find_element(by="", value=""). Check the cheat sheet for more info.

  • @ankitagoyal4211
    @ankitagoyal4211 2 years ago

    Hi, I am getting this error: WebDriverException: unknown error: cannot find dict 'desiredCapabilities'
    (Driver info: chromedriver=2.25.426923 (0390b88869384d6eb0d5d09729679f934aab9eed),platform=Windows NT 10.0.19044 x86_64), Can anyone please tell me how to fix this?

  • @KonstantinosMilonas-s2g
    @KonstantinosMilonas-s2g 2 months ago

    try:
        # If the element is found, this row is an upcoming fixture, so skip it
        if match.find_element(By.XPATH, "./td[@data-ng-if='dtc.isUpcomingFixture(match)']"):
            continue
    except:
        # If the element is not found, process the match normally
        pass
    Add this inside the for loop, under the for condition, so the loop ignores the upcoming-match tr and td elements, which might otherwise cause an exception.

  • @mbras2023
    @mbras2023 1 year ago

    Cheat sheet is not available anymore... :-(

  • @mrmajnu2399
    @mrmajnu2399 2 years ago

    Determination is key, and reframing of things you view as complicated.

  • @anas.351
    @anas.351 2 years ago

    Can you please upload the cheat sheet again? The link is not working.

    • @ThePyCoach
      @ThePyCoach  2 years ago

      I've just checked the link and it's working just fine 🤔

  • @PastorFran8
    @PastorFran8 5 months ago

    I get a traceback timeout error.

  • @denisbaranoff
    @denisbaranoff 2 years ago

    👍👍👍

  • @Nolgath
    @Nolgath 2 years ago

    I just can't scrape the whole website, it only returns me the first result... I'm going crazy omg...

  • @AndresRodTrades
    @AndresRodTrades 2 years ago

    Yeah, it comes with everything. Use the free trial if you can't afford it. The free trial has no limits, it's the same, nothing's changed.

  • @merisbatak9097
    @merisbatak9097 2 years ago

    soft

  • @al823
    @al823 2 years ago

    I am getting the error 'str' object is not callable on the line match_date.append(team_match.find_element(By.XPATH('./td[1]').text())). Can anyone advise please? I tried it with .text as well but no result.

  • @zakariyaeaitmouh3216
    @zakariyaeaitmouh3216 1 year ago

    Hello, I have a problem with the attribute 'find_element_by_xpath'; it throws the following error: date.append(match.find_element_by_xpath('./td[1]').text)
    AttributeError: 'WebElement' object has no attribute 'find_element_by_xpath'. Can you help me please?

  • @samiurrahman5533
    @samiurrahman5533 2 years ago

    Every time at the instruction website.get() I get the Selenium web tab, and after it fully loads it disappears. Why is this happening?

  • @Nolgath
    @Nolgath 2 years ago

    The dot won't work... name = n.find_element(By.XPATH, "./html/body/div[1]/div[4]/div/div[3]/div[1]/div/div[1]/div[1]/h3/a")