I used to binge watch Netflix, now I'm binge watching all your videos. Thank you, Alex for all your amazing videos!
Glad you like them!
@@AlexTheAnalyst Thank you so much! Made my day
#Me2 kinda sorta ….🎉🎉🎉🎉😂😂😂😂😂
that is so funny 🤣🤣. Thank you for this laugh...
I remember watching this a few years ago when I was starting my journey; it's still the best tutorial I have watched. I am currently a senior engineer.
I tried learning web scraping at least 5 times and failed every time. But you made everything simple and approachable. Please, it's a request from my side: resume this playlist and teach basic to advanced scraping using Python. I can't learn this without you. Thank you in advance, and waiting for more videos in this playlist, Alex.
I am pretty new to data analysis, and I was working on a project where I needed to scrape data from a website, and this tutorial has been so helpful! I spent hours trying to figure it out, and the other tutorials on RUclips don't explain anything or skip steps, so it's hard to learn and personalize it for your own project.
This however was detailed and straight to the point! Thank you so much. You're a lifesaver!
I am watching the entire series, and I must say I am really enjoying Python. It has a lot of use in my day-to-day work, even as I am thinking of transitioning to data analytics.
Mate, you are out of this world.
You are the best, your videos have really helped me a lot.
But this series of web scraping videos has been like you were reading my mind. I was thinking of doing a project on my own, but the only way to get the data is through web scraping.
Waiting for the next video. One question I have: what is the procedure if I want to extract the hockey team information from pages 2, 3, etc.?
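One way to reach the later pages is to build each page's URL in a loop. This is a minimal sketch, assuming the site paginates with a page_num query parameter; check what actually appears in your browser's address bar when you click page 2 and adjust accordingly.

```python
# Sketch: build the URL for each page number, then fetch each one in turn.
# Assumes a "page_num" query parameter; adjust to match the real site.
base_url = "https://www.scrapethissite.com/pages/forms/"
page_urls = [f"{base_url}?page_num={n}" for n in range(1, 4)]

for url in page_urls:
    print(url)
    # page = requests.get(url)  # then parse each page as in the video
```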
Thanks for all you do, Alex. Could you be so kind as to continue this series, especially advanced scraping, like scraping unstructured data, etc.?
Thank you for this! Awesome starting point for my nlp project!
Hi Alex (as if!)
Thanks for all the content
As an Informatics Engineering graduate, this is easier for me to understand, since we learned HTML back then.
You don't need to chain the find function to get text; just try soup.find_all(arguments...)[x].text.strip(). You can pass 0, 1, 2, 3... for x depending on which element you want. For example, at 10:15, with x=1 the text should be "Year", because index 1 is the second element (Python indexing starts at 0).
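To illustrate the tip above with a tiny inline table (a stand-in for the page in the video, so the header names here are just examples):

```python
from bs4 import BeautifulSoup

# A tiny stand-in for the tutorial's table; only the headers matter here.
html = "<table><tr><th>Team Name</th><th>Year</th><th>Wins</th></tr></table>"
soup = BeautifulSoup(html, "html.parser")

# Index straight into find_all() instead of chaining .find():
second_header = soup.find_all("th")[1].text.strip()
print(second_header)  # → Year
```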
Thank you so much, sir. I was caught up in a problem, but I was able to solve it after watching this video.
Hello, I commented on one of your previous videos enquiring about the offer you had made in one of your "How to Build a Resume" videos, concerning resume reviews. I completely understand if that is no longer the case, considering that video was 3 years ago, but if you are still reviewing resumes, I would like to send mine to you. Have a nice day, and congratulations on hitting 500k.
Thank you, I also finished this playlist.
Hello, thank you for your tutorial, great info. What editor do you use?
Thanks, Alex!
Hey Alex,
I'm trying to grab text that is randomly generated on the Random Word Generator website for my hangman project. The problem is that the text I grab isn't in the HTML; it always shows as "loading...". What new techniques can you teach us for grabbing this kind of data? Thanks!
Hello, thank you so much for this wonderful tutorial. However, I have one doubt that needs clarifying. I tried this out with the same code and URL you used, but there seems to be a problem with this line -> print(Soup.find_all('p',class_="lead")). The output for this line shows [], which isn't the paragraph from the website. How do I rectify this problem? Also, I use IDLE for Python. Once again, your videos are awesome, and I hope you continue making more great coding content.
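For what it's worth, find_all returns an empty list whenever nothing matches: either the class name isn't an exact match, or the content is injected by JavaScript after the page loads, in which case it never reaches requests at all. A small sketch with made-up HTML:

```python
from bs4 import BeautifulSoup

# If the class doesn't match exactly, find_all returns [] instead of erroring.
html = '<p class="lead-text">Some paragraph</p>'
soup = BeautifulSoup(html, "html.parser")

print(soup.find_all("p", class_="lead"))       # [] - no exact class match
print(soup.find_all("p", class_="lead-text"))  # the paragraph tag
```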
Interesting video, keep it up!
Where can we find the same notebook page you use throughout the video? :)
Is there a next video in the series?
If I type soup.Find('div'), nothing displays, but that tag is there in the page source.
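Method names in Python are case-sensitive, so a capital F won't call BeautifulSoup's search method; the lowercase .find is the one that works. A quick check, using a made-up snippet of HTML:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<div>hello</div>", "html.parser")

# Lowercase .find is the real method:
print(soup.find("div"))        # <div>hello</div>
print(soup.find("div").text)   # hello
```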
Does it work only with static pages, not shops like Amazon? There were some problems with the past tutorial when we tried Amazon web scraping using Python. How can we tell the difference? Thanks for all your videos ;)
.find_all(...).text does not show the '\n' on my PC, even though you could see the escape character at work. Is there a setting I could use to show these characters so I can clean the text easily?
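One way to see the hidden characters is to print the repr() of the string instead of the string itself: print() renders '\n' as an actual line break, while repr() shows it as an escape so you can see exactly what .strip() removes. (The team name below is just a sample string.)

```python
# print() renders '\n' as a line break; repr() shows it as an escape.
raw = "  Boston Bruins\n"
print(raw)                # extra blank line from the trailing \n
print(repr(raw))          # '  Boston Bruins\n'
print(repr(raw.strip()))  # 'Boston Bruins'
```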
html.parser is correct bro
Hey guys, I am getting an 'SSLCertVerificationError'. Can anyone kindly help me resolve this?
You can fix this with the two lines below. The error typically occurs when there is an issue with SSL certificate verification during an HTTPS connection, i.e., when the remote server's SSL certificate cannot be verified:
requests.packages.urllib3.disable_warnings()
page = requests.get(url, verify=False)