Industrial-scale Web Scraping with AI & Proxy Networks

  • Published: 17 Dec 2024

Comments • 641

  • @beyondfireship
    @beyondfireship  Год назад +143

    Use this link to get a $10 credit, enough cash to scrape thousands of pages: get.brightdata.com/fireship

    • @ItsDeanDavis
      @ItsDeanDavis Год назад +3

    • @Reddblue
      @Reddblue Год назад +28

      This man selling wood and iron to shovel makers

    • @anze
      @anze Год назад +6

      @beyondfireship ad link doesnt work

    • @NoahKalson
      @NoahKalson Год назад +1

      ​@@anze worked for me. Try now.

    • @tamasmajer
      @tamasmajer Год назад +5

      The pricing page says $20/GB. I checked how big the pricing page itself was: it loaded 4 MB, so that works out to $20 for 250 pages? That seems very expensive. Or how should I calculate the price?

  • @rvft
    @rvft Год назад +1277

    I like how he didn't use "cheap" during the entire video because my god the pricing is absolutely madness on the advertised product

    • @brunopanizzi
      @brunopanizzi Год назад +195

      Industrial scale!!!

    • @koba2160
      @koba2160 Год назад +69

      scraping aint cheap, but theres many ways to make it much cheaper

    • @mrgyani
      @mrgyani Год назад +32

      ​@@arteuspw what do you mean by 1gb/$1? You mean browsing 1gb of data for a dollar with a single proxy?
      How many proxies do you get for $1?

    • @ИмяФамилия-ч3и6щ
      @ИмяФамилия-ч3и6щ Год назад +30

      @@arteuspw Please tell me where to buy them at this price.

    • @mantas9827
      @mantas9827 Год назад +15

      Is 20$ per GB considered expensive? I wonder how much could you scrape from a site like amazon for that GB... surely a lot ?

  • @albiceleste101
    @albiceleste101 Год назад +815

    As a freelance dev I get contacted all the time for scraping, it's definitely one of the most requested along with Wordpress (which I also dont work with)

    • @cymaked
      @cymaked Год назад +60

      interesting - 8 years of freelancing and never had one such request 😮

    • @dinoscheidt
      @dinoscheidt Год назад +189

      And with a freelancer, the business has the advantage that YOU break the terms and conditions of the companies you scrape (you are legally liable and suable), not the business 😊 so a cheap code monkey and legal scapegoat all in one 💪

    • @mrgyani
      @mrgyani Год назад +3

      Where do you get these projects from?

    • @VividCoding
      @VividCoding Год назад +3

      @@dinoscheidt Wait can they really do that? They are the ones who wanted to scrape the data in the first place.

    • @dabbopabblo
      @dabbopabblo Год назад +78

      I'm not even a freelancer and I can't count on two hands the number of times I've been asked to make someone a website. They think because I'm a web developer I'm just some guy who goes around making websites willy-nilly. And the few times I have actually gone through with helping someone out, they want everything Wix or Wordpress provides and have the audacity to suggest I shouldn't be asking so much in pay when a drag-and-drop builder would suffice.. THEN USE THE BUILDER GOD DAMMIT. My knowledge is wasted on front-end work anyways.

  • @alexcasillas2488
    @alexcasillas2488 Год назад +94

    This reminds me of when I solved 100 captchas manually so that I could download some data files from a website for an AI. I got a server message temporarily banning me from the website, saying that I must be a bot. I learned my lesson and stuck to only solving 99 captchas each day from then on until I had enough data files

  • @Loubensdoriscar
    @Loubensdoriscar 11 месяцев назад +4

    Zeus Proxy's specific emphasis on session management is a key factor that resonates with my goal of executing data retrieval tasks with a focus on mimicking genuine user behaviors.

  • @yashkhd1100
    @yashkhd1100 Год назад +52

    To be frank, out of all YouTubers, Fireship has the most interesting and to-the-point videos and gives the most value for time spent. Kind of just wondering how he keeps track of all the varied topics and is able to make the most out of them.

    • @julienwickramatunga7338
      @julienwickramatunga7338 Год назад +12

      He already has five prototypes of Neuralink chips plugged into his brain, linked to the Web via 5G, and he is using digital clones of himself (coded in JS of course) to make more video content (with the help of ChatGPT).
      That makes him the most powerful being on the planet.
      Praise the Cyber-Jeff! 👾

    • @AdamBechtol
      @AdamBechtol 9 месяцев назад

      Mmm

    • @clerpington_the_fifth
      @clerpington_the_fifth 4 месяца назад

      well thankfully there's this thing called a search button

  • @YuriG03042
    @YuriG03042 Год назад +79

    toward the end of the video, Jeff suggests that you can grab all the links and then make requests to those links. it gave me flashbacks of another video on the main channel where a company did this and ended up with a 70k+ GCP bill after one night of web scraping, because their computing instance was forever recursing and was scalable up to 1000 instances lmao

  • @Maneki-Nico
    @Maneki-Nico Год назад +11

    Your videos are somehow exactly relevant to the code I am writing every week - interesting for sure!

  • @wtfdoiputhere
    @wtfdoiputhere Год назад +73

    Web scraping is still my favourite type of project; it's so fun and "meaningful" to me, and with the help of AI I can see it becoming much, much easier

    • @schizotypical
      @schizotypical Год назад +12

      same, gives me shitton of satisfaction

    • @GeekProdigyGuy
      @GeekProdigyGuy Год назад +2

      thanks Jesus

    • @alejandroarango8227
      @alejandroarango8227 Год назад +1

      Unfortunately GPT4 is still too expensive to use in projects and gpt3.5 is still too stupid.

    • @wtfdoiputhere
      @wtfdoiputhere Год назад +3

      @@alejandroarango8227 it's stupid enough so you still do much of the work yourself cause eventually it's just a tool to help and personally it helps me enough

    • @wtfdoiputhere
      @wtfdoiputhere Год назад

      @@schizotypical exactly what i feel

  • @BharadwajGiridhar
    @BharadwajGiridhar Год назад +20

    One thing, Jeff: these websites change CSS class names on every refresh. So it's better to write selectors against attributes that don't change, like id or aria-label.
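
The advice above can be sketched as a small helper that prefers stable hooks over minified class names. The priority order and attribute names here are illustrative assumptions, not anything from the video:

```javascript
// Build a CSS selector from an element's attributes, preferring hooks
// that survive redeploys (id, aria-label, data-*) over auto-generated
// class names like "_cDEzb". Priority order is a reasonable default.
function stableSelector(tag, attrs) {
  if (attrs.id) return `#${attrs.id}`;
  if (attrs['aria-label']) return `${tag}[aria-label="${attrs['aria-label']}"]`;
  // data-* attributes are usually set deliberately, so they tend to be stable.
  const dataKey = Object.keys(attrs).find((k) => k.startsWith('data-'));
  if (dataKey) return `${tag}[${dataKey}="${attrs[dataKey]}"]`;
  // Fall back to class names only as a last resort.
  return attrs.class ? `${tag}.${attrs.class.trim().split(/\s+/).join('.')}` : tag;
}
```

For example, `stableSelector('div', { 'data-asin': 'B0ABC123' })` yields `div[data-asin="B0ABC123"]`, which keeps working when the minified class names rotate.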

  • @meansnada
    @meansnada Год назад +216

    I love how there are legit businesses to bypass captchas and mess with data :)

    • @dislike__button
      @dislike__button Год назад +21

      Scraping isn't illegal

    • @Tylersmodding
      @Tylersmodding Год назад +2

      and individuals

    • @aresakmalcus6578
      @aresakmalcus6578 Год назад +1

      @@dislike__button if it's against Terms of Service of the given site, it is

    • @Bruceylancer
      @Bruceylancer Год назад +28

      @@aresakmalcus6578 I'm not a lawyer, but how can it possibly be illegal? It can be against ToS, sure, then the website owners can surely act accordingly, i.e. ban your account on the said website, ban your IP address, and so on. But illegal? Are there any laws out there that prohibit collecting public data? Are there any cases of people getting sued for scraping? I haven't heard of such, maybe you can provide some examples. Also, there are 8-figure businesses built on scraping, like Ahrefs or Semrush.

    • @Bruceylancer
      @Bruceylancer Год назад +2

      @@Andrew-zy7jz Exactly! Very good example.

  • @Wei-KuoLi
    @Wei-KuoLi Год назад +2

    Thank you for teaching me puppeteer and bright data, beats all content on internet

  • @EliteGamerpk
    @EliteGamerpk Год назад +210

    As a web scraping tool developer, one thing to note about the chatGPT code about extracting product names etc is that it's not going to work on all cases. What I mean by that is we can see there are some random class names like '._cDEzb'. And these classes can vary from page to page. So your code for one listing page, might not work for other. The way I do this is using some advanced query selectors that don't rely on unreliable classes. Can go into more detail if required.

    • @CrackedPlayz
      @CrackedPlayz Год назад +16

      Please do!

    • @RiChYFanatics
      @RiChYFanatics Год назад +13

      Dont be shy :p

    • @myhitltd5826
      @myhitltd5826 Год назад +5

      so that's why I copy full selector of the element and work with it in puppeteer.

    • @MrNsaysHi
      @MrNsaysHi Год назад +3

      AFAIK puppeteer doesn't support finding elements by xpath, so what do guys use?

    • @thrand
      @thrand Год назад +2

      @@MrNsaysHi well, real men write their own html parser and query language. But peasants like myself use css selectors with document.querySelectorAll.

  • @felixmildon690
    @felixmildon690 Год назад +10

    Tutorial starts at 2:15

  • @abz4852
    @abz4852 Год назад +4

    fireship you are uploading videos faster than new javascript frameworks get released

  • @Jeanseb23
    @Jeanseb23 Год назад +4

    You've foiled my plan 5 years in the making. At least now I have a free 10$ credit for Brightdata to catch up. Thanks Fireship!

    • @gatonegro187
      @gatonegro187 9 месяцев назад

      how much did u end up spending

  • @shawnvirdree8593
    @shawnvirdree8593 Год назад +3

    Wow, you’re on the cutting edge of technology 🤯

  • @xanderbarkhatov
    @xanderbarkhatov Год назад +120

    If I'm not mistaken, page.waitForSelector(selector) already returns the element handle, so you don't need to use page.$(selector) after that.
    Anyway, great video, as always.
    Thank you! ❤

    • @yvanguemkam4739
      @yvanguemkam4739 Год назад +6

      You're right, I wanted to say that... But I don't have money to spend on the browser. Is there an alternative?

    • @cyberzjeh
      @cyberzjeh Год назад +8

      ​@@yvanguemkam4739 you can host puppeteer yourself and pay for a proxy service if you need it, might come out cheaper (but more work obviously)

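Per Puppeteer's documented behavior, `page.waitForSelector` resolves with the ElementHandle it found, so the follow-up `page.$()` really can be dropped, as the thread above notes. A minimal sketch (`grabText` is a hypothetical helper name):

```javascript
// waitForSelector resolves with the ElementHandle it found,
// so a second page.$(selector) lookup is redundant.
async function grabText(page, selector) {
  const handle = await page.waitForSelector(selector);
  // ElementHandle.evaluate passes the element as the first argument.
  return handle.evaluate((el) => el.textContent.trim());
}
```

Because `grabText` only relies on those two methods, it works against any Puppeteer-compatible page object.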
  • @DanielLavedoniodeLima_DLL
    @DanielLavedoniodeLima_DLL Год назад +13

    I remember that web scraping was a nightmare to deal with, especially doing the proxy rotation ourselves. This tool is not cheap, though, so at least here in Brazil (and other emerging countries alike), companies will still be doing it like in the old days. The captcha solving was actually done by real people at the time I worked at a company that mined that kind of data a few years ago, but I guess this can be automated with GPT-4 tools now

  • @Ruf4eg
    @Ruf4eg Год назад +2

    Man, you are reading my thoughts! this video came at the right time when I wanted to scrape some websites!!!!

  • @beefykenny
    @beefykenny Год назад +2

    This video
    has a lot of value.

  • @ikedacripps
    @ikedacripps Год назад +8

    When I first saw puppeteer when I was learning nodejs this is exactly the kind of use case I wanted to apply it to. Specifically wanted to scrape csv files and have some AI learn it and make some sense out of it. I think it’s now more than possible

    • @DemPilafian
      @DemPilafian Год назад +13

      Downloading CSV files would typically not be considered _"scraping"._ You don't have to scrape the data out of a CSV file -- it's already data.

    • @ikedacripps
      @ikedacripps Год назад

      @@DemPilafian you just wanna falsify my statement but scraping for csv file is as valid as scraping for pdf files. I specifically wanted to scrape soccer analytics websites for those csv files. Hope that puts it into perspective for you .

  • @Victor4X
    @Victor4X Год назад +39

    Stuff isn't censored properly at 3:00
    But I assume those creds are temporary anyway

    • @cymaked
      @cymaked Год назад +5

      theres many videos on Fireship where he jokes about living dangerously and letting the cred be seen 😂 obv temp stuff

    • @thie9781
      @thie9781 Год назад +2

      ​@@cymaked or just F12 to let somebody waste their time

  • @selimachour
    @selimachour Год назад +4

    I usually block the fetching of images, css, fonts (and javascript if the website can run without) which speeds up the page load by a lot!
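
Puppeteer exposes this trick through request interception. A sketch of the idea, with the blocked resource types taken from the comment above; `handleRequest` is written against the documented Request methods (`resourceType`, `abort`, `continue`) so it can be exercised without a browser:

```javascript
// Resource types to drop; 'media' added here as an extra assumption.
const BLOCKED = new Set(['image', 'stylesheet', 'font', 'media']);

// Works with any object exposing Puppeteer's Request interface.
function handleRequest(request) {
  if (BLOCKED.has(request.resourceType())) request.abort();
  else request.continue();
}

// Wiring (needs a real Puppeteer page, so shown but not executed):
// await page.setRequestInterception(true);
// page.on('request', handleRequest);
```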

  • @Autoscraping
    @Autoscraping 11 месяцев назад +6

    An extraordinary piece of video material that has proven highly useful for our new team members. Your generosity is immensely appreciated!

  • @aseluxestays
    @aseluxestays Год назад +1

    I'm here because I need to hire someone who can provide this service for me. Great video!

    • @TheHassoun9
      @TheHassoun9 Год назад

      Hi, I'm willing to help. I'm a dev looking for commission work.

  • @kevinbatdorf
    @kevinbatdorf Год назад +14

    some of those query selectors look like they’d break in a week. Maybe you need to add openai to the workflow more directly

    • @RichardHarlos
      @RichardHarlos Год назад +3

      It's a proof of concept/tutorial, not an explicit recommendation for bulletproof boilerplate. Context, eh? :)

    • @yellowboat8773
      @yellowboat8773 Год назад

      Maybe outputting the html every time to openai then having that pick the query selector then insert into the script. Do have to be very specific with your prompt because it often replies with: The query selector is: a.carousel

  • @prabhavkhera4959
    @prabhavkhera4959 Год назад +14

    Thanks Jeff. I was planning on building a project that uses web scraping and this video absolutely dropped at the perfect time. Appreciate it. I love your videos and hope for more such content in the future :)

  • @classmanOfficial
    @classmanOfficial Год назад +4

    Selenium has a headless mode :) if you guys want to try it out, works well enough for multithreading

  • @bossdaily5575
    @bossdaily5575 Год назад +28

    Virgin API users vs Chad Web scrapers

  • @maxivy
    @maxivy Год назад +24

    Awesome video - I will have to rewrite it in Python though ;) because I am a human bean

    • @NicolaiWeitkemper
      @NicolaiWeitkemper Год назад +4

      BeautifulSoup is better anyways :P

    • @priapulida
      @priapulida Год назад +1

      @@danielsan901998 or Pyppeteer

    • @NicolaiWeitkemper
      @NicolaiWeitkemper Год назад

      @@danielsan901998 Correct, that's not an even comparison. However: BeautifulSoup >> Cheerio

  • @VaibhavShewale
    @VaibhavShewale Год назад +3

    damn, that was really amazing. I was actually thinking of taking a snippet of the page, extracting the data, then deleting that page and repeating

  • @blaizeW
    @blaizeW Год назад +3

    Another gold gem for daddy fireship 🤑🔥

  • @abishekbaiju1705
    @abishekbaiju1705 Год назад +1

    Thanks for making this video. I am actually working on a project where the users can add amazon products and look for price changes and also get notified with price changes. My objective was to learn web scraping.

  • @tioModz-w6i
    @tioModz-w6i 3 months ago

    Thanks to the author for the new setup. I checked it, everything works.

  • @NathanDodson
    @NathanDodson Год назад +151

    See. This is why I watch all your videos, Jeff. I'm a super shit JS coder, but I'm pretty decent with Python. This gives me an idea for my own eBay business, and scouring those tool docs for Python SDKs to do the same thing. Honestly, it's been your videos that have kept me in the coding space. You always have these creative "concept/idea" videos and a good majority of them have me opening up VSC to do some tinkering. Thanks for all your content brother.

  • @EuricoAbel
    @EuricoAbel 9 месяцев назад +1

    Incorporating Zeus Proxy into your SEO strategy ensures efficient and effective monitoring and data gathering processes.

  • @danieldosen5260
    @danieldosen5260 Год назад

    I never thought of returning data as JSON... that's obvious and brilliant...

  • @senorclouds
    @senorclouds Год назад

    This is a great spell for the Hogwarts AI Academy. Thanks, Professor Fireship ^^

  • @boba-b5n
    @boba-b5n 3 months ago

    Many thanks for the working setup.

  • @nichtolarchotolok
    @nichtolarchotolok Год назад

    Been using puppeteer for a few yrs for freelance web scraping. Puppeteer and Playwright have been a saving grace in many circumstances.

    • @donirahmatiana8675
      @donirahmatiana8675 Год назад

      could you give some tips to not getting ip banned?

    • @nichtolarchotolok
      @nichtolarchotolok Год назад

      @@donirahmatiana8675 puppeteer-extra library and the puppeteer-extra-stealth plugin. If that doesnt work, you'd need rotating proxy like that of bright data as mentioned in the video.

    • @jacekpaczos3012
      @jacekpaczos3012 Год назад

      @@nichtolarchotolok are you not using scrapy? I always thought of scrapy as the most convenient solution.

    • @nichtolarchotolok
      @nichtolarchotolok Год назад

      @@jacekpaczos3012 I started off on the nodejs route and havent had the need to try the python way of doing this. I do remember trying scrapy in my early days but for some reason puppeteer felt more intuitive to me. That is probably because I felt more comfortable writing javascript code.

    • @kellymcdonald7095
      @kellymcdonald7095 5 месяцев назад

      I just saw a comment above saying clients request web scraping tools, but if it's not legal to scrape the website, then how do you take up freelance web scraping? What if the client uses the data and the company you're scraping from finds out about it? Won't you be in trouble, or how does this work?

  • @estebancordoba555
    @estebancordoba555 Год назад +2

    In my country, some products are more expensive than on Amazon. I built a scraper to get the products and prices with params such as brand or name, but Amazon blocked me a couple of times. This is a really nice solution!

  • @forbiddenera
    @forbiddenera Год назад +8

    Puppeteer is the source of non stop memory leak nightmares for me. Fortunately I got it down to under like 30mb a day but originally it was like 30mb per leak and like 250+mb a day leaked (and it was mostly only loading 2 pages back and forth)

    • @alejandroarango8227
      @alejandroarango8227 Год назад +3

      I avoid using it to the maximum, it is a waste of server resources.

    • @andy12379
      @andy12379 Год назад +1

      You could just close the browser and open a new one every time you use it to avoid memory leaks
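
The "fresh browser per job" suggestion above can be wrapped in a helper that closes the browser in a `finally` block, so even a crashed job can't strand leaked pages. `launch` is a parameter here (in real use, `() => puppeteer.launch()`) purely so the pattern is self-contained:

```javascript
// Run one scraping job in a throwaway browser; always close it,
// even if the job throws, so leaked pages die with the browser.
async function withFreshBrowser(launch, job) {
  const browser = await launch(); // e.g. () => puppeteer.launch()
  try {
    return await job(browser);
  } finally {
    await browser.close(); // releases all pages/contexts in one shot
  }
}
```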

  • @d3layd
    @d3layd Год назад +2

    Thank you for this! I used ChatGPT to write a puppeteer script for me the other day and it was fucking slick

  • @CODE_YOUR_TYPE
    @CODE_YOUR_TYPE 11 месяцев назад

    I love you man i was trying for so long and you are the only one who gave the solution thank you so much

  • @F1NEk
    @F1NEk Год назад +12

    you totally can see username and password at 3:01

  • @JoãoVitorDeSouzaSouza-b2e
    @JoãoVitorDeSouzaSouza-b2e 3 месяца назад

    Отличный вариант, попробовал, все работает, спасибо! ❤

  • @chaseclingman
    @chaseclingman Год назад +7

    I liked how you showed the timeout as 2 * 60 * 1000 so beginner friendly haha

    • @mrgalaxy396
      @mrgalaxy396 Год назад +16

      I mean, that's way more readable than 120000; this is a pretty common practice

    • @clerpington_the_fifth
      @clerpington_the_fifth 4 месяца назад

      @@mrgalaxy396 could also do 120_000 if it was Python
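
For the record, 2 * 60 * 1000 is 120,000 ms, i.e. two minutes; naming the units makes the arithmetic self-documenting (the constant names below are an illustrative convention):

```javascript
// Spell timeouts as value * unit so the math reads as prose.
const MS_PER_SECOND = 1000;
const MS_PER_MINUTE = 60 * MS_PER_SECOND;
const NAV_TIMEOUT_MS = 2 * MS_PER_MINUTE; // two minutes = 120000 ms
```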

  • @hermanplatou
    @hermanplatou Год назад +24

    Doesn't Amazon rotate the classes and ids, effectively breaking your selectors?
    Not sure how the most advanced RPA bots work, but I'm hoping that some of them offer an AI that grabs screenshots and parses them instead. Would be interesting as a follow-up!

    • @makkusu3866
      @makkusu3866 Год назад +4

      Yea, I think classes should be assumed to be autogenerated, at least after every deployment if not every request. A fast and dirty solution would be to use the OpenAI SDK to prompt ChatGPT to generate the document query code and eval it

    • @trappedcat3615
      @trappedcat3615 Год назад +6

      @@makkusu3866 You can select elements based on attributes or lack of attributes, or you can use pseudo-classes such as :nth-of-type. There are dozens of them.

    • @iljazero
      @iljazero Год назад +2

      @@trappedcat3615 yea, that is how i wrote scraping for other website, i targeted div elements with style X which often ... doesn't change cuz... why ;D

    • @arthur0x2a
      @arthur0x2a Год назад +3

      Most websites with random ids/class names still have a common and repetitive structure. Axios + regex and you'll process ~10 times as many pages as Puppeteer, with minimal bandwidth by default and simpler code. Just validate the output with a strict schema (as you always should) and you'll maybe have to update it once a year at most.
      Puppeteer's only real advantage is its TLS fingerprint

    • @cyberzjeh
      @cyberzjeh Год назад

      ​@@arthur0x2a you can also use sg like cheerio as a middleground between an entire headless browser, and parsing html with fukken regex (chad move tho ngl)

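A minimal sketch of the regex-plus-strict-schema approach described in the thread above. The `data-name`/`data-price` markup is invented for illustration; a real page needs its own pattern:

```javascript
// Extract { name, price } pairs from raw HTML with a regex instead of
// a headless browser. The attribute names below are hypothetical markup.
function extractProducts(html) {
  const re = /data-name="([^"]+)"[^>]*data-price="([^"]+)"/g;
  const out = [];
  for (const [, name, price] of html.matchAll(re)) {
    out.push({ name, price: Number(price) });
  }
  return out;
}

// Strict validation: if the site's markup drifts, throw loudly instead
// of silently passing garbage downstream.
function validateProducts(products) {
  for (const p of products) {
    if (typeof p.name !== 'string' || !p.name) throw new Error('bad name');
    if (!Number.isFinite(p.price) || p.price < 0) throw new Error('bad price');
  }
  return products;
}
```

The validation step is what lets this survive "once a year" maintenance: a layout change produces an immediate error rather than a quietly empty dataset.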
  • @KabbalahredemptionBlogspot
    @KabbalahredemptionBlogspot Год назад

    OK that was way cooler than I thought

  • @Kevgas
    @Kevgas Год назад +3

    You should create a course on how to do this, I'd pay for that!

  • @ehsanpo
    @ehsanpo Год назад

    web scraping with ruby and rails is one of the best ways

  • @kusztelson2947
    @kusztelson2947 Год назад +8

    One problem you may face is that class names used for selectors change over time, as they are generated every time the website is deployed, breaking your code.

    • @assmonkey9202
      @assmonkey9202 Год назад +1

      Reverse engineer algo for generating class names

  • @felixmildon690
    @felixmildon690 Год назад

    Best video yet, thanks Fireship. This will introduce me to Puppeteer and the services BrightData offers (BrightData's prices are a concern, based on the comments section)

  • @maxivy
    @maxivy Год назад +8

    This may be the first time I actually was excited about a sponsored segment and will actually sign up for the product

  • @kinglane8634
    @kinglane8634 Год назад +9

    Thanks for always helping us devs keep our workflow clean and simple!!! If you plan on starting a subscription service I'd love to see what you're offering.

    • @trickster6254
      @trickster6254 Год назад +1

      He has got a website offering courses. I bought the Angular one myself and it was really good.

  • @TheMalcolm_X
    @TheMalcolm_X Год назад

    This video felt like one giant sponsored ad.

  • @wlockuz4467
    @wlockuz4467 Год назад +28

    Remote browser as a service is actually a genius idea. Oftentimes when you want to scrape at scale, the most painful part is hosting and using effective proxies.
    But with this you can literally leave the scraper running on your machine and let Bright Data take care of the proxies. You don't even need good specs, because the browser runs on a different server.

    • @quickkcare605
      @quickkcare605 Год назад

      Well thought!

    • @klapaucius515
      @klapaucius515 Год назад +7

      smells like ad

    • @wlockuz4467
      @wlockuz4467 Год назад

      @@klapaucius515 Do you mean that for my comment or the video?

    • @arrvee7249
      @arrvee7249 Год назад +6

      ikr, then you can just pay brightdata $10,000 and go on to make $52 for the data you've scraped.
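
The remote-browser setup discussed above reduces to handing `puppeteer.connect` a provider's WebSocket endpoint instead of launching Chrome locally. A sketch, assuming the generic `user:pass@host` credential scheme; a real provider's endpoint format may differ:

```javascript
// Build a WebSocket endpoint for a remote browser service.
// Host/port and the credential scheme are illustrative placeholders.
function browserEndpoint({ user, pass, host }) {
  return `wss://${user}:${pass}@${host}`;
}

// Wiring (needs the real service, so shown but not executed):
// const puppeteer = require('puppeteer-core');
// const browser = await puppeteer.connect({
//   browserWSEndpoint: browserEndpoint({ user, pass, host }),
// });
```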

  • @progamer1196
    @progamer1196 Год назад

    as soon as I saw the thumbnail I knew this was an ad for brightdata

  • @hamza-325
    @hamza-325 Год назад

    I worked for a digital shelf company that scrapes data from Amazon and other websites. They used many proxy services, but one of the most expensive was BrightData, so the more experienced workers always instructed us not to use BrightData unless it was really necessary.

    • @sciencenerd8326
      @sciencenerd8326 Год назад

      what are the others that are better?

    • @hamza-325
      @hamza-325 Год назад

      @@sciencenerd8326 the company made some cheap proxies using AWS machines, for example (they don't have many IPs, but they do the job for many websites). And I think there are cheaper services like ProxyRack.

    • @fhnvcghj1587
      @fhnvcghj1587 Год назад

      @@hamza-325 I have a Selenium bot task: I have 1000 accounts but need one IP per account to make requests to the website and do the work. Any ideas, or a paid service for that?

  • @TPAKTOPsp
    @TPAKTOPsp Год назад +2

    Any reason why you have used puppeteer over playwright? I see bright data has support for both.

  • @NuncNuncNuncNunc
    @NuncNuncNuncNunc 11 месяцев назад +2

    5:10 Works until the generated class names change the next time the site has a minor update.

  • @MrKrzysiek9991
    @MrKrzysiek9991 Год назад

    The Microbots AI Chrome extension helps with building a prompt with the HTML code included. Check it out if you want to write automation code faster.

  • @kasparsc
    @kasparsc Год назад +1

    Sir, you are a legend 🔥🔥🔥

  • @rstar899
    @rstar899 Год назад

    Amazing video as always 🎉

  • @KhaledAlMola
    @KhaledAlMola Год назад

    That is a cool website to use. I'll try it one day

  • @ozten
    @ozten Год назад +12

    Those css selectors look super fragile.

    • @RichardHarlos
      @RichardHarlos Год назад +1

      It's a proof of concept/tutorial, not an explicit recommendation for bulletproof boilerplate. Context, eh? :)

  • @manfredcomplex366
    @manfredcomplex366 Год назад

    Freaking Money Glitch. Love you man❤

  • @harisonfekadu
    @harisonfekadu Год назад

    Your ingenuity is something else. It's devs like you that won't be replaced by AI.

  • @Jason-nv6ku
    @Jason-nv6ku Год назад

    You're amazing! Many thanks!

  • @AbuBakar-pc2fp
    @AbuBakar-pc2fp Год назад

    Awesome Explanation

  • @Xld3beats
    @Xld3beats Год назад +7

    Guess it's time to write a program that applies to every job on the internet

  • @SpencerDwight
    @SpencerDwight Год назад +2

    Would it be possible to scrape base file types from a website to access their assets?
    For example; there's a T-shirt image that I want to save, but I can only save as a .avif file.
    Ideally, I'd be able to access the underlying file type (png/jpg) and save it in full resolution.
    If anyone has any feedback regarding if advanced web scraping can extract this, please lmk.

  • @daniamaya
    @daniamaya Год назад

    Gold. Just pure gold.

  • @v1s1v
    @v1s1v Год назад +1

    Nice tutorial, but there are AI tools now like Kadoa that can do all of this for you. In the time it takes for you to watch this video, you can get an AI scraper up and running.

  • @Dev-Siri
    @Dev-Siri Год назад +2

    just as I thought the ai videos ended

  • @wandenreich770
    @wandenreich770 Год назад +1

    Very insightful

  • @nskiran
    @nskiran Год назад

    We used to use Selenium WebDriver (web actions) and PhantomJS to scrape data.
    IP problems were solved with Nohodo.
    The good old days of the 2014 stack.

  •  Год назад +1

    For this topic alone, it's worth learning Python along with Scrapy

  • @kalelsoffspring
    @kalelsoffspring Год назад +2

    Presumably this can be used to DDoS as well; do you know if there are any protections in place, or how blame is handled if someone does cause something like that? Like, Amazon will start giving 403s; does it automatically get a fresh clean IP? Those aren't infinite, so I'm curious if you'd be charged for going through too many IPs at a particular service

    • @xetera
      @xetera Год назад

      bright data is insanely expensive so that's the protection against DDoS lol. You'll run out of money before you even have the chance to send enough traffic to cause a problem

  • @kevinbraga9526
    @kevinbraga9526 Год назад +1

    Great video. I have a question for you: how do you know that this is the industry standard for modern web scraping?
    Like, how can you find out this information?

  • @stewmcminn8241
    @stewmcminn8241 2 месяца назад

    I would like to use puppeteer more but it's painful to use. Trying to find the right selector that works with puppeteer is an art and time-consuming. For example, I made a puppet that logs into a website then selects from a round wheel the time of day for a calendar/timer feature. The wheel didn't have a clear selector, so I had to trial and error mouse coordinates, and it was super wonky. Each time trial and erroring a click, I had to watch the code compile and the puppet log into the website before it got to the screen I was stuck on.
    I'd like to know if you have any tricks or tools aside from inspecting code in chrome. For instance, if there was a way I could dynamically write my puppeteer code as it is being tested, that would prevent me from having to re-compile and watch the puppet click through login pages etc to get to the point I want to test it each time I iterate a test.

  • @petrlaskevic1948
    @petrlaskevic1948 Год назад

    Do all search engines do it like this? I don't think that a website for searching furniture from my country bothered to talk to each one of the sellers and make an arrangement with them. Or did they?

  • @ShellyHernandez-x
    @ShellyHernandez-x 7 месяцев назад

    Wondering about Proxy-Store's scraping proxies' effectiveness? Saw them on Google, any experiences?

  • @LukasKlinzing
    @LukasKlinzing Год назад +1

    Came here for the "AI" and the "industrial" part. Got basic scraping with no AI. Yay

  • @test-rj2vl
    @test-rj2vl 4 месяца назад

    I wonder how google does web indexing? Is it also using those shady proxies or does it have some more legal way?

  • @exploringcrypto6609
    @exploringcrypto6609 Год назад +5

    Jeff how can you process data so fast?

  • @HackWare
    @HackWare Год назад +8

    we see password at 3:02 bro!!

  • @danvilela
    @danvilela Год назад

    Brooo, this is awesome!

  • @gorilaz0n
    @gorilaz0n Год назад +1

    Have you tried scraping Google? They caught on Selenium even when I rotated IP and proxy. I wonder if your code bypassed that

  • @bigplaneenergy8674
    @bigplaneenergy8674 Год назад +5

    @Beyond Fireship You left the username/passwords unblurred there for a second

  • @neonbyte1337
    @neonbyte1337 Год назад

    At 3:00 we can see your credentials without blur effect

  • @alihansencan394
    @alihansencan394 2 месяца назад

    Great information thank you !

  • @luxurycondobbmg
    @luxurycondobbmg Год назад +4

    I remember my first time scraping a website - except back then, we didn't have ChatGPT proompts to do it for us. We had to physically read the documentation and actually understand the code we wrote

  • @CandyLemon36
    @CandyLemon36 Год назад

    I'm impressed by the depth of this material. A book with corresponding themes was a key influence in my life. "AWS Unleashed: Mastering Amazon Web Services for Software Engineers" by Harrison Quill

  • @marouanenajime6936
    @marouanenajime6936 9 месяцев назад

    What is the particular data he scraped in this video actually useful for? Isn't it just the list of Amazon best sellers that's shown on the page itself?

  • @summonlucifer
    @summonlucifer Год назад

    If you use selenium to open a browser window you can easily scrape from any website

  • @sad_man_no_talent
    @sad_man_no_talent 10 месяцев назад

    My first time seeing a video that's beyond 100 seconds

  • @whatsup3519
    @whatsup3519 Год назад +1

    What future work is there related to computer science, other than programming?

    • @ZeonLP
      @ZeonLP Год назад

      Data science, machine learning, developing and deploying hardware, working with cloud systems / distributed computing, DevOps etc. Programming jobs aren’t going away anytime soon, though, since all of the above scientific fields have parts that are „programmable“.

  • @forbiddenera
    @forbiddenera Год назад

    While Puppeteer can run headless, you don't have to run it that way. It may still seem headless in the everyday sense of the term, but headless or not is a config option for Puppeteer, and running with headless disabled can sometimes help beat bot detection.
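
A sketch of the launch-time toggle described above; whether headful mode actually passes more bot checks varies by site, and the helper name is illustrative:

```javascript
// Choose headless vs. headful Chrome at launch time.
// headless: false opens a visible window, which some bot checks
// that fingerprint headless browsers are more lenient toward.
function launchOptions(visible) {
  return { headless: !visible };
}

// Usage (needs Puppeteer installed, so shown but not executed):
// const browser = await puppeteer.launch(launchOptions(true)); // headful
```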