Selenium Headless Scraping For Servers & Docker
- Published: 26 Oct 2023
- In this video we learn how to do web scraping with Selenium in a server or container environment.
◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾
📚 Programming Books & Merch 📚
🐍 The Python Bible Book: www.neuralnine.com/books/
💻 The Algorithm Bible Book: www.neuralnine.com/books/
👕 Programming Merch: www.neuralnine.com/shop
💼 Services 💼
💻 Freelancing & Tutoring: www.neuralnine.com/services
🌐 Social Media & Contact 🌐
📱 Website: www.neuralnine.com/
📷 Instagram: / neuralnine
🐦 Twitter: / neuralnine
🤵 LinkedIn: / neuralnine
📁 GitHub: github.com/NeuralNine
🎙 Discord: / discord
Good man. This was super helpful. Easily saved me 5+ hours of searching around.
This video tutorial came at the perfect time. I'm currently working on a project that scrapes from a docker container, and I've been struggling to find out how to make it work. Thank you NeuralNine.
Thanks a lot
I'd struggled since yesterday to get Selenium up and running inside Docker.
This works perfectly.
It was very helpful thank you! I'll definitely keep this in mind ❤
Thank you from St. Petersburg! Your video helped me a lot with my automation project at work. Now I can continue building the project.
Thanks for the awesome video! This is exactly what I needed for my project!
Truly one of the most useful Python and Selenium videos. You're an ace!
300K subscribers ❤🖤
Congratulations, man.
All of your videos are always good and helpful. Keep it up.
Thank you
"Great video man. Very helpful and well explained. Thank you very much!!!"
Thank you so much for posting this video, it solves exactly what was blocking me!
Great tutorial, thank you for your efforts !!
Thank you my bro! Works like a charm!
Thank you. This was really helpful.
Thanks, this is an incredibly useful video.
You saved me, you won a new subscriber.
Very helpful video!
Thank you
Useful utility - thanks !
Thanks man, this is very helpful. Can you also create one for Scrapy? What are the areas we should be concerned about when deploying a service that requires Scrapy?
Awesome video. I ran it as in the video and it worked!! Thanks
This is great! Thanks!
There is another way: you can use a remote webdriver (set this up on a remote server with the official Selenium Docker image), then run the scraping part remotely.
Can you provide some reference? I want to find out more.
thanks! greetings from Brazil.
thanks bro! you solved my problem
Thanks a lot.
from Brazil
I had been trying to set up Chrome and chromedriver for the Docker image for the past 5 hours; ChatGPT had me swinging from one command to another. Finally your 16-minute video helped me, thanks a ton. Alhamdulillah. AI cannot replace devs; today I finally got a taste of that.
As-salamu alaykum. Which URL did you use to download Google Chrome? The one in the video returns a 404.
You might want to revisit the documentation and see that some modifications have been made. Thanks
Thanks man!
Great video! Quick question: if you need to scrape several pages from a website, is it possible to make it async and print the results as soon as Selenium finishes each page, as opposed to printing everything only after all pages are scraped? If so, I would love to see a video on that topic.
Good information
Thank you so much for creating such helpful videos! Can you make a video on how to make an AI Spotify playlist generator where each track seamlessly transitions into the next?
You are Genius
Nice. It would be great if you did a video on SeleniumBase, using "from seleniumbase import SB".
The current setup I have works in a Docker container when I run it on Windows, but when I pull it to my EC2 instance on AWS it doesn't work:
it tries to go to the URL to get the data but just takes forever and then times out.
Thx bro
In the main file I was getting the error "Failed to send GpuControl.CreateCommandBuffer" when I ran the script locally. Adding chrome_options.add_argument('--disable-gpu') made the error go away. Just in case anyone else runs into that error message.
Wouldn't it be better to use Selenium Grid instead? Then I could use the Grid as the driver instead of doing all that.
Hi, for several days I have been dealing with a problem I cannot solve. I have a script that obtains the profile URL, but on some profiles it does not work. I made sure the selectors are valid on both profiles, as is the HTML structure. I am running my code on a DigitalOcean Linux server without a GUI.
Can we use Puppeteer to do this? Also, please make a video on Puppeteer.
Nice trick
I've followed the steps, and it works correctly on my PC, but when deploying it on AWS EC2, Selenium fails and doesn't scrape. Do you know what could be causing this?
Does it work with Streamlit Cloud?
Since the 'new headless' mode, it's not working for me anymore. Do you know how to make it work?
Does anyone know any good cloud options? I want my scraping script running 19 hours/day, and obviously that's expensive.
Hi,
probably I missed that information, but why is Selenium necessary?
Can't you get the HTML content with Beautiful Soup?
Selenium is handy for JavaScript-heavy websites, where you need a browser to execute the JavaScript that renders parts of the site.
With Beautiful Soup you only pull in the bare HTML by itself, and you would have to fetch all the JavaScript separately and execute it correctly.
Updated! Too bad it didn't work for Amazon.
The Dockerfile isn't working for me; the dependencies for Chrome aren't getting installed in the Docker container.
This doesn't work on my M1 Mac, any suggestions?
Good tutorial. Minor correction on verbiage at 9:55: it's building a `docker image` from a Dockerfile. Then from the image we run a container using `docker run`.
Wouldn't it be much easier to use firefox instead of chrome?
I have Python 3.11 on Windows 10. I'm just using a text editor to edit the Python program, and I'm using a virtual environment in my cmd.exe shell. On the line "driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))" I get this error: "'powershell' is not recognized as an internal or external command, operable program or batch file."
It looks like Selenium supports Python 3.11, so that shouldn't be the issue. I also have Selenium 4.21.0.
Any idea how to fix this?
No github code?
Thx.
Bro, you don't need chromedriver? Why does it work normally?
Because webdriver-manager automatically installs chromedriver, I think.
dude how come you know everything :D
First comment bro ❤
Has anyone run into this error?
executor failed running [/bin/sh -c apt install -y ./google-chrome-stable_current_amd64.deb]: exit code: 100
Yes, me. Have you found a solution yet? I still can't figure out what the problem is.