Great content! Two comments: instead of writing the scraped info into the CSV again and again, you could create a 2D array and write each scraped item into that array, and then at the end you'd write the array to the CSV in a single take. The outcome would be the same but I think it would be cleaner. Second comment: since you are scraping an html table, I think you could do that directly with pandas, that is, you don't need beautiful soup. Thanks for the video, keep them coming. Cheers
Agree on the array, but was concerned trying to explain an array and a final dump might be too much.
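The accumulate-then-write pattern being discussed can be sketched like this; the row values and column names here are made-up stand-ins for whatever the scraper actually pulls:

```python
import csv
import io

# Collect every scraped row into a plain list of lists first...
rows = []
for scraped in [["2023-10-01", "DAL", "30"], ["2023-10-08", "NYG", "17"]]:  # stand-in values
    rows.append(scraped)

# ...then dump everything to the CSV in one pass at the end.
buf = io.StringIO()  # swap in open("games.csv", "w", newline="") to write a real file
writer = csv.writer(buf)
writer.writerow(["date", "opponent", "points"])  # header row
writer.writerows(rows)                           # single bulk write
print(buf.getvalue())
```

The single writerows call at the end replaces the per-row writes inside the scraping loop.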
Interesting on pandas. Still new to pandas, so I didn't know it could do it directly without soup parsing. Thanks!
Hey man, great video!
I am getting an error saying name "col" is not defined, and I think I copied the code verbatim. Do you have any tips? Also, will this logic work for sportsbooks?
Hmmm, it seems that it may be getting set incorrectly. You would have to define "cols" first, which would be what you are running the find_all on. Depending on what you are pointing to, it could differ: it could be a <td>, a <th>, or even a <div>. What this piece does is group all matching items of the html into the object "cols". Then you would have a nested loop "for col in cols:", which is basically saying: for each item of html found, do what is inside the loop.
Are you able to copy and paste into a comment the bit of code where you have set col? Could you also let me know what url and data you are trying to reach? If you don't want to paste it here, you can also reach me on Twitter/X @wageredOnTilt
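The cols / "for col in cols:" pattern described above can be sketched with BeautifulSoup; the html snippet here is invented for illustration:

```python
from bs4 import BeautifulSoup

html = "<table><tr><td>Smith</td><td>22</td></tr><tr><td>Jones</td><td>18</td></tr></table>"
soup = BeautifulSoup(html, "html.parser")

data = []
for tr in soup.find_all("tr"):       # one <tr> per table row
    cols = tr.find_all("td")         # define cols first: every cell in this row
    row = [col.get_text(strip=True) for col in cols]  # "for col in cols" per cell
    data.append(row)
print(data)
```

If your cells are <th> or <div> elements instead of <td>, the find_all argument changes but the loop itself stays the same.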
What are you using so you can open the webpage front end and back end at the same time, next to each other? (complete noob)
I am just using the inspect feature, which is in most browsers. If you are using Chrome or Firefox, right click on the page and a floating menu will appear; click Inspect, and that should open up the html on the screen.
Can you show how to do this with PrizePicks or sportsbooks for player props?
Sorry, thought I replied to this. Doing PrizePicks or prop sites will require submitting login info, and they actively change the site structure.
@@wageredontilt1649 no worries. I figured out a way a while back.
Hello! First of all, thank you for showing us how to do this. I followed your code to the letter in an attempt to get the 2023 data from my csv list, but I keep running into errors involving playerlist['id"].iloc[x] and "in __getitem__ return self". I am wondering if there is a problem with my csv or if I don't have all the right packages installed.
Did you copy and paste the playerlist code? If so, you have a mismatch of " and '. You'll want to use one or the other.
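For anyone hitting the same error, the quote mismatch looks like this; playerlist here is just a stand-in DataFrame:

```python
import pandas as pd

playerlist = pd.DataFrame({"id": ["smithjo01", "jonesda02"]})

# playerlist['id"].iloc[0]  # broken: opens the string with ' but tries to close with "
player_id = playerlist["id"].iloc[0]  # fixed: matching quotes on both sides
print(player_id)
```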
Hey man. You mention Unabated and videos being on there, but I don't see anything. All I see are calculators and odds machines that are already done.
You can find the videos in the education section of the site, or on their YT channel in the playlists
@wageredontilt1649 thanks so much man! I've tried watching others but your method is so easy for me to understand. Appreciate the replies too!
Is there a way to get all the betting odds from different sports betting sites?
You can scrape from books that have exposed data. You may need to log into the book to scrape the data, but you would need to be sure their site html structure matches what’s in your code.
@@wageredontilt1649 I'm a coding newbie. Probably take me a while to figure this out, but maybe someone has already done this and has shared their code publicly.
What does 'exposed data' mean? Sorry, I'm completely new to this. @@wageredontilt1649
Great! T delivered once again
Glad you found it useful! If there is anything additional you’d like to see, let me know!
I ran the code and it created the csv file, but no info is in the file. What am I doing wrong?
Likely it is pointing at a non-existent table, or has a bad ID. The other possibility is the URL being built isn't being found.
What was the url you had the scraper trying to reach and what was the table ID?
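A quick offline sketch of the empty-file failure mode described above, with a made-up table id:

```python
from bs4 import BeautifulSoup

html = "<html><body><table id='per_game'><tr><td>22</td></tr></table></body></html>"
soup = BeautifulSoup(html, "html.parser")

# If the id in your code doesn't match the page, find() returns None and any
# row loop after it silently does nothing -- which yields an empty csv.
missing = soup.find("table", id="stats_table")  # wrong id on purpose
found = soup.find("table", id="per_game")       # the id that actually exists

print(missing is None)    # the empty-file case
print(found is not None)
```

Printing the result of find() right after the call is an easy way to catch the bad-ID case before the CSV write runs.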