Hey Brandon, I'm a Python noob. Let me start off by giving you a hearty thanks for sharing your code. I'm using a Linux distro, and the output rendered dramatically differently from the way it does in Windows; I had to paste the output into a word processor to see precisely the same results that you had. I'm already familiar with for loops, but I'll need to study lines 14 to 29 to fully comprehend that segment of the program. Thank you again.
Thank you so much for the feedback!
You made it so simple, thanks!
Glad to hear that!
Hi Brandon Harding, thank you very much for your video. It has helped me so much in learning how to scrape data from a web page. If it's okay, could you make another video on saving the crawled data into an Excel file? Have a nice day. 😀
For sure! I'll get working on that one 👍
That's really a good start... clear and simple explanation. All the best! Please do more projects in Python.
Thank you! It’s my first one of these videos so it’s great to hear that.
I have a couple more videos to make that will be a continuation of this video. The series after that will be about using ML to make predictions.
Hey @sambecker6024! Sorry for the delay. I’ve been building a web application for a startup using Python Flask and am looking forward to making a video detailing this project.
Hey, awesome video! I'd like some insight into a problem I came across. My URLs end with (start=0&count=25, start=25&count=25 ... start=250&count=25), so I modified your code, and when I printed the pages I did get a link for a different table each time. However, my final list contains only the 25 tr rows from the first page, repeated 11 times, so 275 rows in total. I am only getting the first table 11 times, even though my URLs lead to different tables. Would love some insight.
It sounds like your code is continuously fetching the first page. You can try something like this to debug and verify that you're generating the correct URLs:
for start in range(0, 275, 25):
    url = f"example.com/path?start={start}&count=25"
    print(url)
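Once the URLs print correctly, make sure the request itself happens inside the same loop. A rough sketch of that shape (the URL, requests, and BeautifulSoup usage here are assumptions, not your exact code):

import requests
from bs4 import BeautifulSoup

rows = []
for start in range(0, 275, 25):
    # build and fetch each page inside the loop, not just once before it
    url = f"https://example.com/path?start={start}&count=25"  # placeholder URL
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    rows.extend(soup.find_all("tr"))  # collect the rows from every page

print(len(rows))  # should now include different rows from each page

If you still end up with the same 25 rows repeated, the site is probably serving the first page regardless of the query string, and you may need to double-check the exact parameter names it expects.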
Thanks for sharing!
No problem! Glad you liked it.
Thanks. I can’t believe I actually got this to work. How do I put the info in a cover file?
I had to look up what a cover file is. If you'd like the data in a csv file, you can use a library like 'pandas' to export the data to a csv.
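For example, a minimal pandas sketch, assuming the scraped rows are already collected in a list of lists (the column names and values here are made up):

import pandas as pd

# placeholder data standing in for the rows scraped from the table
scraped_rows = [["AAPL", 185.2], ["MSFT", 410.5]]

df = pd.DataFrame(scraped_rows, columns=["symbol", "price"])
df.to_csv("stocks.csv", index=False)  # writes stocks.csv next to the script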
Hey, I'm new to coding and I don't know if it's just me, but the code only seems to work on the website in the video. Any help would be appreciated, thanks!
Good question! This code scrapes the contents of an HTML table with class "tabMini tabQuotes". If you use another website, you'll need to inspect the HTML of that new website and identify the id, class, etc. of the elements that contain the data you'd like to scrape.
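As a rough illustration of swapping the selector for another site (the URL here is a placeholder):

import requests
from bs4 import BeautifulSoup

# placeholder URL; inspect the real site to find the element holding your data
response = requests.get("https://example.com/quotes")
soup = BeautifulSoup(response.text, "html.parser")

# the video's table uses class "tabMini tabQuotes"; on another site you would
# swap in whatever class or id you find in its HTML
table = soup.select_one("table.tabMini.tabQuotes")
for row in table.find_all("tr"):
    print([cell.get_text(strip=True) for cell in row.find_all("td")])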
Hello sir! I hope you are enjoying good health. I am new to learning Python but very interested. Please guide me on how I can begin. Thanks, sir.
Hi there! I started out about a year ago by making a basic "to do" web app in Flask (there are tons of tutorials online). That will give you a solid foundation to build upon. I might make a short intro video if that's something you'd be interested in.
@brandonharding216 Thanks, sir. Sure, I will find those videos. Whenever I need help, you will be a beacon for me. Thanks a lot for your support.
So does this scrape through the entire list of stocks listed on the NASDAQ? Also, to make the output more visible, can we store this info in a CSV? And say I wanted to create a leaderboard for a separate website of my own, how would I be able to do that?
This code will scrape all the data from the financial website in the video. This output can absolutely be saved to a csv (another video coming soon).
Creating a separate website that scrapes this data is more involved but can be done. I would recommend using Flask (Python) for the website backend.
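As a very rough sketch of that Flask direction (the route name and hard-coded data are placeholders, not a finished site):

from flask import Flask, jsonify

app = Flask(__name__)

def get_scraped_rows():
    # placeholder: in a real app this would run the scraper or read its saved csv
    return [{"symbol": "AAPL", "price": 185.2}]

@app.route("/leaderboard")
def leaderboard():
    # serve the scraped data as JSON for your own leaderboard page to render
    return jsonify(get_scraped_rows())

if __name__ == "__main__":
    app.run(debug=True)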
Thanks! How can I scrape this page automatically when the values on the page change? Should I run it on a server or something?
It really depends on what your end goal is. If you'd like to create a web app, deploy it to the cloud, and run the scrape on a schedule, Celery is a great option (docs.celeryq.dev/en/stable/getting-started/introduction.html).
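For the scheduling side, a minimal Celery beat sketch could look something like this (the Redis broker, 15-minute interval, and file name tasks.py are just assumptions):

from celery import Celery

# assumes a Redis broker running locally; swap in whatever broker you deploy
app = Celery("scraper", broker="redis://localhost:6379/0")

# celery beat queues this task every 15 minutes
app.conf.beat_schedule = {
    "scrape-every-15-minutes": {
        "task": "tasks.scrape_page",
        "schedule": 15 * 60,  # seconds
    },
}

@app.task(name="tasks.scrape_page")
def scrape_page():
    # placeholder: fetch the page here and store or compare the values
    pass

You'd then run a worker with beat enabled (e.g. celery -A tasks worker --beat) on whatever server or cloud host you choose.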
@brandonharding216 I want to run a bot so that when I receive a Telegram message from any channel, the bot automatically copies it and sends it to another Telegram user. If you can help me with this, how do I keep the Python bot listening for new messages?
Is it possible to extract the price, date, and stock name from the TradingView platform after every left click and write them to a log file in CSV or Excel format?
Anything is possible! This sounds like it would require some combination of JavaScript and Python.
@brandonharding216 If you can help me with a solution based on this, or make a video out of it, it would be of great help.
Is web scraping of free/open public data legal?
Data that is freely available to the public and doesn't require authentication is typically considered fair game for web scraping. But always check the terms of service on the website you are planning on scraping.
Why is it that I am getting only one company?
Can you share your code in a GitHub repository?