Thank you for watching! We hope you find this video helpful! Please leave a comment if you have any questions. If you are interested in web scraping and tutorial videos, subscribe to our RUclips channel: ruclips.net/user/Oxylabs
This is a gold-standard guide. It literally covers most of the cases.
We're delighted to read this!
This is right on the spot; most other videos don't even come close to covering it all.
Wow. Great Video! I was looking for a video that highlights realistic and efficient web scraping and this is it. Thanks.
Short, detailed, and very informative. That's how a good tutorial is made.
Thanks
Thank you so much!
Hi, thanks for this video! For me, on dynamic sites, using Selenium to get the page source doesn't work; it still responds with the JavaScript tags. Is the path of the server request and response: browser request -> server response -> JavaScript response -> API response -> browser? Thanks!
Thank you for the feedback. It’s difficult to say without seeing the code, but have you tried using different web drivers? The situation might change if you add some actions to the page. Also, it might be handy for you to check the code in textual form: oxylabs.io/blog/dynamic-web-scraping-python.
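For reference, here is a minimal sketch of adding an explicit wait before reading the page source; the quotes.toscrape.com/js page and the "quote" class are only examples, not necessarily what your site needs:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://quotes.toscrape.com/js/")
# Wait until JavaScript has rendered at least one element with class "quote"
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, "quote"))
)
html = driver.page_source  # now contains the rendered content instead of the bare script tags
driver.quit()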
Is this possible with a website that requires input from the user, for example adding a quantity or selecting a shipping service?
Thank you for the informative tutorial! I will probably try web scraping over the next month, so I'll comment here again if I have any problems!
Thanks for watching, and definitely reach out if you need any help!
Why are you not using the requests-html library? It seems to achieve the same thing in a simpler way.
Good point, thanks for the feedback!
it's dead
Can I use Jupyter notebooks for what you just did?
Hello, yes, you can. However, it's mostly useful for practice or small-scale tasks. You can learn more here: oxylabs.io/blog/what-is-jupyter-notebook
How do you get the data when the script tag doesn't contain it and instead a file is mentioned?
Hello! Thank you for asking :)
If there is a src= attribute, then you need to get the file content by making an additional request to the URL defined in the src= attribute.
source = soup.find("script")  # the script tag that references the external file
link_to_file = source["src"]  # URL (possibly relative) of that file
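A minimal sketch of that additional request, assuming the requests library and that the src value may be relative (the base URL below is only an example):

from urllib.parse import urljoin
import requests

file_url = urljoin("https://example.com/", link_to_file)  # resolve a relative src against the page URL
js_text = requests.get(file_url).text  # the content of the external JavaScript file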
Can you make a new video on Selenium 4 for dynamic web scraping?
We'll keep that in mind for our future videos :)
Hi! I was not able to install the chromedriver, do you have any suggestions?
Hi! Could you specify why you couldn't install the driver? Was there an error message of any sort?
6:14, line 10.
Is that the path to the folder with chromedriver or to chromedriver.exe? Either way, mine won't work.
Hey. That's the path to the chromedriver executable. Your question is hard to answer since we don't know what kind of error you are getting. If it's a File Not Found error, make sure that the path leading to the chromedriver is correct; try using a full path instead of a relative one if it is failing for you. Also, if you're running on Windows, the chromedriver should have .exe at the end. Hope this helps!
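For reference, a sketch of passing the driver path explicitly using the Selenium 4 Service object (the path below is only an example; the video may use the older positional-argument style):

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Use a full path; on Windows it would end in chromedriver.exe
service = Service(executable_path="/full/path/to/chromedriver")
driver = webdriver.Chrome(service=service)
driver.get("https://quotes.toscrape.com/js/")
print(driver.page_source[:200])
driver.quit()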
Hello!
And why do all the parsers analyze the same site? It would be interesting to see different approaches...
Thanks for the interesting example!
Thanks for this video. I never thought about using F12 and the Network tab to find the source of a website's data. Greetings!
This method is impossible if the script has a src, especially reCaptcha, right?
The presented method works on any and all types of web pages, both static and dynamic. As a page containing reCaptcha is a type of dynamic page, you can read, extract, and manipulate the data that is present on the page. :) Hope this helps you!
😭😭😭 I don't know how to say thank you. I've been searching for help with this AJAX stuff; this is the one that made my day.
That's so sweet to hear! Glad you enjoyed it!
Thanks a lot for all the content you constantly share. I would like to ask you something: would this tutorial example work if I wanted to deploy it on the web as an API to consume afterwards? Thank you so much.
Hey, we're glad you like our content! As for the tutorial, it's focused on showing how to build your own scraper. In case you want an easy way out, try Oxylabs Scraper APIs for free: oxy.yt/2iM
9:15, what does line 10 do?
Hey! It defines a regular expression to match certain combinations of characters within a document. This one specifically is looking for whatever text is between var data = and a new line.
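A minimal sketch of such a pattern, using an inline string that stands in for the page source (the exact regex in the video may differ):

import re

page_source = 'var data = [{"author": "Albert Einstein"}];\nvar other = 1;'
pattern = re.compile("var data =(.*?)\n")  # capture everything between "var data =" and the newline
print(pattern.search(page_source).group(1))  # ' [{"author": "Albert Einstein"}];'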
Thanks for the video. Could you please explain where you got the value 'h3 > a' for select at the end of the video?
Hello. We’re glad you enjoyed it!
The h3 > a syntax just tells Beautiful Soup to get a tags that are directly beneath h3.
You can try going to the link displayed in the video (librivox.org/search/?q=time%20machine&search_form=advanced), right-click on any book title and select "Inspect". This should open the exact place where you can see that the a tag sits directly below the h3. Have fun!
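A minimal sketch of that selector in use, assuming the rendered search-results HTML is already stored in a variable called html:

from bs4 import BeautifulSoup

soup = BeautifulSoup(html, "lxml")
for link in soup.select("h3 > a"):  # a tags that are direct children of an h3
    print(link.text)  # the book titles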
Hello, please help me: how do I get the text "Wilson Tour Premier All Court 4B" out of this?
soup = BeautifulSoup(html, 'lxml')
title = soup.find('h1', class_='product--title')
Tennis balls Wilson Tour Premier All Court 4B
Hey, thanks for asking!
There are a couple of ways (without modifying your find function)
1. Retrieve a list of the tag's children and select the last one on the list. Strip the whitespace afterwards:
title.contents[-1].strip()
2. Retrieve the whole text of the title, split it by the double space, and select the last string on the list:
title.text.split("  ")[-1]
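A self-contained sketch of both options, using an inline HTML snippet that approximates the product-title markup (the real page's markup is assumed, not verified):

from bs4 import BeautifulSoup

html = '<h1 class="product--title"><span>Tennis balls</span>  Wilson Tour Premier All Court 4B</h1>'
soup = BeautifulSoup(html, "lxml")
title = soup.find("h1", class_="product--title")
print(title.contents[-1].strip())  # option 1: last child node, whitespace stripped
print(title.text.split("  ")[-1])  # option 2: text after the double space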
Excellent video. Quick question: when I press Ctrl+U on the website, my source page looks different. I don't have the script tag with the data anywhere; instead there are separate elements for each section. Does this matter, or was "script" just used to locate the data needed?
Hello! If you're unable to locate the script tag in the source page of the website, do make sure that you visited "quotes.toscrape.com/js" (adding the `/js` at the end of the URL).
This is important to ensure that you can follow the script-parsing portion of the guide, as we then extract the information found in that script tag.
You can see the differences when comparing the source codes of "quotes.toscrape.com/js/" vs "quotes.toscrape.com"
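A minimal sketch of locating that tag on the /js version of the page, assuming the requests library (the plain quotes.toscrape.com serves the quotes as regular HTML instead):

import re
import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get("https://quotes.toscrape.com/js/").text, "lxml")
script = soup.find("script", string=re.compile("var data"))  # the inline script holding the quote data
print(script is not None)  # True on /js; on the plain site the quotes sit directly in the HTML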
Hi, it was very good, thanks. But I'm facing a problem: at line 13, I get "NameError: name 'data' is not defined". Any idea how to fix it?
Hello! It seems that you're trying to access a variable called data, but it doesn't exist. Please double-check the names of the variables you have defined. Also, there are a couple more scenarios in which this error might get triggered; this article summarizes them quite well:
www.geeksforgeeks.org/handling-nameerror-exception-in-python/
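A minimal illustration of the error and the fix; the variable name comes from the comment and the value is just an example:

# print(data)  # NameError: name 'data' is not defined - the variable is used before it is assigned
data = {"quote": "example"}  # define the variable before the line that uses it
print(data)  # works now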
THANKS ! You saved my life! :)
Happy we could help!
Thank you so much. Much needed!
We're glad it helped!
Great video! Thanks!
Hi, could you demonstrate how to asynchronously request pages that require JavaScript rendering?
That's a great idea for a future video, we'll keep that in mind, thanks 😊
I'm soooooo appreciative of you 😄
Thanks for your feedback. It's much appreciated!
It was very useful. Thank you!
We're very glad you liked it!
This video helped me a lot.
Thank you!
Thanks for making this
Love the explanation, but also loved the music. Can you share the track id?
Hey! So happy you enjoyed it :) The track is this one: Purple Planet Music - Corporate Planning
Thank you so much. Great useful info.
Glad you liked it!
Hello. This video seems very interesting and helpful, but I need some more assistance if you can help.
Hey! How can we help you?
Thank You
Awesome. Thank you very much.
Thank YOU for the support!
You saved my day
We're very happy to hear!
Awesome!
Glad to hear!
This is so perfect
Thank you!
Thank you very much Madam ...
You're welcome!
It doesn't work now (crying~~~~~~~~)
Hey. Could you specify where exactly it's not working for you? Maybe we can help!
Thank you, useful!
The way she says html
Awesome!
Thank you!