Karston, your vids are literally saving me as a first-time founder. Keep up the good work
Hey Karston, Pule here, I just wanted to say thank you for the content because I bet you are gonna gain a big following which will make it extremely hard to actually send a semi personal message like this 😅😅😅 I love you strategies, I literally had to scold myself to pick one and apply because they are all really great ideas. I honestly believe you are the best hidden but growing thing on RUclips right now more especially for the make money online crowd. I hope you see this because I wrote it to give you your flowers 💐 🙂
Does HasData offer an easy way to scrape Google Maps data without hitting IP bans? I've been curious about how it handles such large-scale scraping.
Awesome easy to follow video. Just what I needed. The next feature was a game-changer!!!
broooo I JUST FIGURED THIS OUT 2 WEEKS AGO, OMG, WHY CANT I HAVE AN ORIGINAL IDEA GODAMMMMMN
Ask the universe
Nothing is original everything is inspired
Why didn't you make a video like him?
Awesome! But would love a video on onboarding clients! Thanks so much!
I wish all the youtubers created videos like you do. Voila!
What a lovely VIDEO. YOU helped me SOOO much. I can't thank you enough. You made my day and I am so grateful. Thank you! :)
Good explanation, short and clear 👍
I wish you could upload everyday, I love your videos bro
Hey @karston, so I've been facing an issue I'm hoping you could help with... The problem is that on certain websites there's a 20-search-page limit, so my question is: how can I bypass that, please???
Hey Karston Amazing video as always.
I want to start SMS outreach for Realtors, but I can't because I don't know how to do it legally.
It would mean the world if you could record a video on how to send SMS legally!
Cheers
#1 rule of Andrew Tate: don't get legal before you get rich
I don't think he meant to say do something illegal; he meant don't pay taxes, don't have an LLC, etc. @@rodrigovalverdemelgar8752
Whenever I try to paste the Instant Data Scraper links into Octoparse, the links aren't on one line each and I can't run it properly
You nailed it bro
How do I scrape pages that redirect to another page?
Awesome buddy......❤❤🎉🎉🎉❤ Please take my hug. Saved me.
Thank you my life saviour! God bless you and your family.
I'm looking to scrape videos and PDFs from a website. Is this possible?
Have you done a video for scraping emails as well?
Excellent work Kaston.... this might come handy..
Thank you! I completed the first part and was able to paste the results into Excel; however, none of the hyperlinks showed. It's just regular text with the website's URL. Is there an easy way to make the website URLs hyperlink automatically?
Is there any way to scrape the email address from the "contact me" button?
Great video! I heard you mention you can have up to 10 tasks using Octoparse. Are you referring to 10 runs total on the free version, or 10 tasks at a time? Thanks for the video
Hi, how about scraping construction sites?
only genuine video on youtube
Is this applicable in Europe?
Brilliant. Saved my day!
How can you choose other info in the HTML to scrape? Especially product GTINs, etc.?
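For GTINs specifically, one approach, assuming the page uses schema.org product markup (the sample HTML here is hypothetical; inspect your target site's actual markup first):

```python
from bs4 import BeautifulSoup

# Sample schema.org product markup; real pages vary.
html = """
<div itemscope itemtype="https://schema.org/Product">
  <span itemprop="name">Widget</span>
  <span itemprop="gtin13">4006381333931</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
# Grab any itemprop that looks like a GTIN variant (gtin8/12/13/14).
gtins = [tag.get_text(strip=True)
         for tag in soup.find_all(attrs={"itemprop": True})
         if tag["itemprop"].startswith("gtin")]
print(gtins)
```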
Do we need proxies or anything like that to do this on a mass scale?
Wow. Great job!
Followed this guide and worked effectively. Thanks!
Can this scrape dynamic websites, like Walmart's?
How about if I want to save the scraped data in, e.g., a Postgres database?
Can you do this on multiple websites to check what apps they use?
Great! I have a question: some websites show mobile, address, and fax in the same column. How do I get around this?
What about a single tooltip element where the text changes depending on where you're hovering your mouse? I can't get it to automatically loop through different mouse positions & extract the changing tooltip text. I can only get it to extract the text from the last place I hovered my mouse. Thanks!
Thanks Karston, you are great, saved me a ton of time & money.
Mind blowing💨
It's not bad for those who can't code.
But it has some problems.
1.) You can only scrape 10k sites at once; when it comes to e-commerce, that's not much for one site, since they can easily have 40k product pages or more.
2.) It is too fast; you will run into IP bans and other trouble. You have no option to add random time delays or random user-agent changes, for example.
I created myself a Python scraper where I only need to change the config and the script is ready for the next page.
On my mini-PC I can run dozens of those scrapers overnight (it takes that much time and more because of the things I mentioned in 2.)) and get valid data with minimal risk of bans.
This works for static pages with bs4, or Selenium for dynamic pages.
How can we scrape the hidden text in a website? Like FAQs, where the answers are only displayed when the user clicks the questions
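Often the FAQ answers are already in the HTML and only hidden with CSS until clicked, so a static parser still sees them. A sketch with BeautifulSoup (the class names `faq-q`/`faq-a` are hypothetical; check the real page's markup):

```python
from bs4 import BeautifulSoup

# Typical accordion markup: the answer is in the DOM but hidden until clicked.
html = """
<div class="faq-item">
  <button class="faq-q">What is scraping?</button>
  <div class="faq-a" style="display:none">Extracting data from pages.</div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
# CSS only hides the element visually; the text is still parseable.
faqs = {q.get_text(strip=True): a.get_text(strip=True)
        for q, a in zip(soup.select(".faq-q"), soup.select(".faq-a"))}
print(faqs)
```

If the answers are instead loaded by JavaScript on click, a static parser won't see them and you'd need a browser-driven tool like Selenium.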
Great job bro
Hi, could you help check how to do it when the "next" button jumps ahead by 10? Like the next button on the site doesn't go to page 2, 3, 4 but 11, 21, 31, etc. Really appreciate it!
This is fire!!!!🔥
Great video!! I get this message: "Table not changed. If the last page was not reached, try to increase crawl delay."
What do I do, if the page only has one button that says "Show more"?
Just try to click that button with the extension, and also increase the crawl delay by a few seconds
Will octoparse work even when the page we want to scrape requires logging in?
What about if the website is asking for credentials to log in...
Thank you so much , this will save me a lot of time ❤
Great video, thanks for sharing it
Why doesn't my Instant Data Scraper have "copy all"?
Could you please help me with the Shadow DOM? Can we scrape links that are in a shadow DOM tree?
Great video Karston. I'm trying to figure out a way to scrape from a data report that's a PDF. Can I highlight specifics or direct the data where I want it to scrape to (like a table graphic or something)? And can this be automated when the PDF/report releases? I'm thinking about a report about the neighborhood that comes out each month, and also about an events page, so I can put this data in my own formats for an email newsletter
Great question
Why no response I wonder
One video on scraping Skool group members' data with their LinkedIn
Nice 🎉🎉
Hi, you are good! Can you tell me how to easily make documentation? A company has sent RSS to my site and I want it back
From Dayton; I thought I was getting tricked when I saw that at the start of the video!
Please, can you give the download link for this scraper?
I can't find the Octoparse extension in Chrome?
Can you do it, but this time extracting their social media sites?
Brilliant ❤
How can you filter information when web scraping? I'm trying to make a tool to scrape some information, but I only want a list of still-ongoing things, not closed cases. How could I do that?
You can filter information while web scraping by using specific conditions in your code. For example, look for keywords like 'ongoing' or 'active' in the HTML elements. If the website has a clear indicator for closed cases, you can exclude those by checking the text or class names associated with closed cases. Using libraries like BeautifulSoup with Python, you can easily navigate and filter the data you need. Good luck with your tool
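A minimal sketch of that idea (the class names `case` and `status` are hypothetical; inspect your target page for the real ones):

```python
from bs4 import BeautifulSoup

# Hypothetical listing where each case carries a status indicator.
html = """
<ul>
  <li class="case"><span class="status">ongoing</span> Case A</li>
  <li class="case"><span class="status">closed</span> Case B</li>
  <li class="case"><span class="status">ongoing</span> Case C</li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")
# Keep only cases whose status text says "ongoing".
ongoing = [li.get_text(" ", strip=True)
           for li in soup.select("li.case")
           if li.select_one(".status").get_text(strip=True) == "ongoing"]
print(ongoing)
```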
How do you set up "scrape next page" if there is no next button? Only numbers and a skip-to-last?
Thank you
In Python, you can handle pagination without a 'next page' link by iterating through the numbered pages. Use a loop to construct URLs for each page number, then extract data until you reach the last page. Libraries like BeautifulSoup and requests make this process easier!
@@webscrapingseniors thank you very much. I really appreciate your help 👌🏼
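To make that concrete, here's a sketch with the page-fetching stubbed out; in real use you'd swap `fetch` for `requests.get(url).text` and `parse` for a BeautifulSoup extractor (the `?page=N` URL pattern is an assumption; check your site's actual URLs):

```python
import itertools

def scrape_all_pages(base_url, fetch, parse):
    """Walk ?page=1, ?page=2, ... until a page yields no rows."""
    rows = []
    for page in itertools.count(1):
        html = fetch(f"{base_url}?page={page}")
        page_rows = parse(html)
        if not page_rows:          # empty page: we ran past the last one
            break
        rows.extend(page_rows)
    return rows

# Stub fetcher/parser to demonstrate; replace with requests + BeautifulSoup.
fake_site = {f"https://example.com/list?page={i}": f"row{i}" for i in (1, 2, 3)}
result = scrape_all_pages(
    "https://example.com/list",
    fetch=lambda url: fake_site.get(url, ""),
    parse=lambda html: [html] if html else [],
)
print(result)
```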
How can i scrape AirBnB?
Does anyone know how to remove certain data entries from being extracted? I'm finding that when I click more than one field entry to extract, it automatically extracts all of the fields in the list, and I'm not sure how to pick and choose which data entries to select, or at least remove the ones I don't want extracted
You can use conditions in your scraping code to specify which entries to exclude. Check the HTML structure for attributes or classes that identify unwanted data, then filter them out in your script
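In code, that exclusion is one condition (the `entry` and `sponsored` class names here are made up for illustration; use whatever marks the unwanted rows on your page):

```python
from bs4 import BeautifulSoup

# Hypothetical listing where some entries carry a class we want to skip.
html = """
<div class="entry">Alice</div>
<div class="entry sponsored">Ad entry</div>
<div class="entry">Bob</div>
"""

soup = BeautifulSoup(html, "html.parser")
# Exclude anything tagged with the unwanted class.
wanted = [d.get_text(strip=True)
          for d in soup.select("div.entry")
          if "sponsored" not in d.get("class", [])]
print(wanted)
```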
Really Good thank you
A big THANK YOU
Quality video
Does anyone know why, when I paste the URLs into Octoparse, it doesn't render the URLs like Karston's?
Yes, I can help. Try checking the format of your URLs or ensure they're properly linked. Also, verify that your Octoparse settings match those used by Karston. Sometimes, adjusting the scraping settings or using the correct data extraction method can help.
Bro how are you not MASSIVE yet??? Good lord! lol
The best
Can't find the extension🥺
me too😔
Thank you.
If we scrape all this data, what do we do with it then?
Train machine learning models, compare prices. Idk, many uses.
New to this, but is it possible to scrape from landing pages??
Just the home page you mean? I don't think that's possible
Super Thanks 😊
gotit
Legend!
You're a G
Me, who uses Beautiful Soup and Selenium 😅
Coding will get you further than just using other's tools ;-)
helpful
I have classes I want to take online, but they get sold out in seconds. We don’t know the time the classes are open. We only know the date. I do not want to be stalking the website to see when classes open. How can I get a sms message to tell me when a class opens? I believe these people are also doing that?
legally?
Yes it’s legal lol
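The usual pattern for this is a small watcher that polls the page and alerts when it changes. A sketch with the fetching stubbed out; in real use, `fetch` would be `requests.get(url).text`, you'd loop with `time.sleep(60)`, and the alert would go out through an SMS gateway service such as Twilio:

```python
import hashlib

def page_changed(fetch, url, last_hash):
    """Return (changed, new_hash) by comparing a hash of the page body."""
    body = fetch(url)
    new_hash = hashlib.sha256(body.encode()).hexdigest()
    return new_hash != last_hash, new_hash

# Demo with a stubbed fetcher that simulates three polls of the page.
pages = iter(["sold out", "sold out", "REGISTER NOW"])
fetch = lambda url: next(pages)

last = None
alerts = []
for _ in range(3):
    changed, new = page_changed(fetch, "https://example.com/classes", last)
    if last is not None and changed:
        alerts.append("page changed")   # send the SMS here in real use
    last = new
print(alerts)
```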
Is there a way to scrape people looking to buy real estate?
Yes, you can scrape public real estate listings and forums where buyers express interest. Also, check if real estate platforms have APIs for data access. Just be sure to follow their terms of service. Happy scraping!
@@webscrapingseniors I’ll try that. Thanks
I thought we needed to write code to scrape data from websites 😅
For difficult cases we still do ;-)
5:06
The author is just another penny-earner on RUclips. What a shame.