Hey team, I hope you enjoy this intro to Selenium and BeautifulSoup, apologies about my volume levels in this one. I will have my audio sorted out by episode four I promise!
Thank you for the video, congrats on far surpassing your goal of subscribers :)
Sir, please use dark mode next time to save our eyes. Thank you.
YouTube has an open API where you can get all that info a lot faster.
Can you provide a link or address to this API please?
?
I thought that's what he was going to do. This is way more complicated than it needs to be.
Thank you so much for this incredibly helpful video on web scraping! Your explanations were clear and easy to follow, and the examples you provided made it much easier to understand. Keep it up!
These videos are amazing and this gave me an idea on how to solve a problem I was having with my automation project.
Also realised I need to start using Jupyter Notebook for my web scraping/automation projects. Game changer!!
Just stumbled upon this page and absolutely loved the content, man. Can't wait for this channel to hit 1mil just to say I was here at almost 5k. Please keep doing these types of videos! Simply amazing, you got a new sub!
Thank you again. This is the 3rd episode I am following with you, I love this series.
Extremely good video. Very educational when following along and listening to your reasoning how to solve things and why.
Thank you, I sometimes need a bit of time to think through problems haha. I've just dropped two episodes of my new end to end analysis, lots of thinking there. Appreciate the feedback!
That's the most useful Python tutorial I've ever watched.
Just stumbled across your channel and I am loving it!! Please make more of these Python-for-money videos!! Great work man, post more pls!!
Thank you! Many more to come!!
$35? The game isn't worth the candle 😂 Thanks man for your great videos ❤
I love this series. Please keep making videos like this.
Another great video. I’ve started trying to get jobs myself in upwork. Thanks
That's great to hear Jeff!
How is it going?
Any luck??
Nice tutorial, thank you.
Wouldn't it be easier to search for the aria-label text in the video-title element, then split it on the comma and calculate the posting time relative to the time of execution? You also get a more accurate view count in that tag.
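A rough sketch of that idea, using a made-up video-title snippet (the live YouTube markup and label wording may differ):

```python
from bs4 import BeautifulSoup

# Hypothetical video-title element; on the real page the aria-label
# packs title, channel, age, duration and view count into one string.
html = ('<a id="video-title" aria-label="Intro to Selenium by '
        'Make Data Useful 2 years ago 15 minutes 12,345 views"></a>')

soup = BeautifulSoup(html, "html.parser")
label = soup.find(id="video-title")["aria-label"]

# The view count sits at the end of the label; strip the thousands
# separators (the commas) before converting to a number.
views_text = label.rsplit(" ", 2)[1]      # "12,345"
views = int(views_text.replace(",", ""))
print(views)                              # 12345
```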
Great job brother,your video made me your student...!
Great video! Thank you so much!
Excellent, great motivation!
Thanks for the comment Daniel, great motivation too!
Easy to follow -- 100 episodes --- ha-ha. hee-hee !
Hello, and thanks for your effort. You're so humble. Why don't you do any entertaining content, since you enjoy the camera? Big thanks, and I wish you the best.
Great video, thank you for all of it.
Great Video Man Thanks
Glad you enjoyed it
Man, thx for your videos, you are cool.
Excellent tricks as usual. By the way, I just got to know about a module called pafy that gives easy ways to get video parameters like author, duration, views, etc.
Oh awesome! I'll check it out and do a follow up video. Thanks for sharing!
@@MakeDataUseful But again, you need a video ID to pass to it to get all the other attributes. There is also a major GitHub project called youtube-dl (github.com/ytdl-org/youtube-dl), but it still needs the video URL, so your job of scraping all the channel's video URLs and metadata was great.
Seems like this should be worth more than $35.
Another good idea is to use the YouTube API instead of the web frontend.
There are many limitations when using the API.
At 14:16 you used find() to find all the video titles. In my understanding, the find function should stop at the first title it finds; how is that not the case in this example?
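For reference, BeautifulSoup's find() does stop at the first match; find_all() is what returns every matching element, so the video is presumably using find_all() or calling find() on a container. A quick illustration:

```python
from bs4 import BeautifulSoup

html = ('<div>'
        '<a id="video-title">First video</a>'
        '<a id="video-title">Second video</a>'
        '</div>')
soup = BeautifulSoup(html, "html.parser")

# find() stops at the first matching element...
first = soup.find("a", id="video-title")
print(first.text)    # First video

# ...while find_all() collects every match into a list.
titles = [a.text for a in soup.find_all("a", id="video-title")]
print(titles)        # ['First video', 'Second video']
```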
I think this may be one of those videos.. entertaining though
On Upwork, 20–50 people will apply for a $10 job; just go to job boards in other languages. Django is OK.
Hey, cool video. What is that IDE you are using?
Hey there, I am writing code live in a Jupyter notebook. Check out episode one of Make Money with Python for more details :)
@@MakeDataUseful I can't seem to find that video, any links?
Wait, so do I need Chrome for this to work?
The volume in this video is quite low.
Thanks for the feedback, I think I've corrected it in my most recent videos. Let me know!
Couldn't this be done more easily by using yt-dlp or something like that?
100% this could all be done in 1-2 lines of Python but where's the fun in that 😄
In convert_views:

    else:
        views = float(df['views'].split(' ')[0])
    return views
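For context, a complete helper along those lines might look like this (a sketch only; the exact branches of the video's convert_views aren't shown in the thread):

```python
def convert_views(views):
    # Normalise YouTube view strings ("1.2K views", "3M views",
    # "512 views") into plain floats.
    views = views.strip()
    if views.endswith("K views"):
        return float(views[:-len("K views")]) * 1_000
    elif views.endswith("M views"):
        return float(views[:-len("M views")]) * 1_000_000
    else:
        # The fix from the comment above: plain counts convert directly.
        return float(views.split(" ")[0])

print(convert_views("1.2K views"))  # 1200.0
print(convert_views("512 views"))   # 512.0
```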
I see all this as learning (and very good learning too). It’d probably be quicker to set up in Power Automate and then do some simple text manipulation directly in Excel, but then you don’t learn Python ;-)
Your volume is very low, I can't hear you. Please improve. Thanks!
Hey, thanks for the feedback. How'd I go on last night's upload?
speak up a little
Hello
I got an error. I cannot find the solution for it. Please help me.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in <module>
1 for _ in range(56):
----> 2 driver.find_element_by_tag_name('body').send_keys(Keys.END)
3 time.sleep(3)
AttributeError: 'WebDriver' object has no attribute 'find_element_by_tag_name'
I got it. I looked into the Selenium documentation and found out there's an update:
driver.find_element(By.TAG_NAME, 'body')
instead of
driver.find_element_by_tag_name('body').send_keys(Keys.END)
driver.find_element_by_tag_name no longer exists in Selenium; it is now find_element().
Also need to navigate past the Accept Cookies page.