Python Ebay Scraping Tutorial: Web scraping with Python and BeautifulSoup | Python projects

  • Published: 10 Sep 2024

Comments • 209

  • @MarkJohnson-qz9ys
    @MarkJohnson-qz9ys 4 years ago +10

    By far the best web scraping instructional video I've seen!

  • @RedEyedCoderClub
    @RedEyedCoderClub  3 years ago

    What video should I make next? Any suggestions? *Write me in comments!*
    Follow me @:
    Telegram: t.me/red_eyed_coder_club
    Twitter: twitter.com/CoderEyed
    Facebook: fb.me/redeyedcoderclub
    Help the channel grow! Please Like the video, Comment, SHARE & Subscribe!

  • @ugurdev
    @ugurdev 3 years ago

    You write code like a poet writes poems, much respect.

  • @Razilator
    @Razilator 3 years ago +2

    Thanks, great videos!

  • @vijaypratap4875
    @vijaypratap4875 4 years ago +1

    Your coding style shows that you're a very skilled developer. Great knowledge!

  • @MehmetDoora
    @MehmetDoora 2 years ago

    I looked for videos like this for a long time, and I finally found yours. They're perfect, thank you.

  • @Alxndros01
    @Alxndros01 4 years ago +2

    I just started learning Python; this has been a great tutorial and helped me better understand both web scraping and functional programming. Please make more tutorials!

  • @hubertcombomarketing2693
    @hubertcombomarketing2693 4 years ago +2

    I really like the way you explain everything. It's very clear and precise. Thank you.

  • @Cyberbreaker3
    @Cyberbreaker3 1 year ago

    Good video ❤, I have learned a lot from this video alone.
    Please make a video on how to scrape with email:pass pairs from a txt file and get details like balance and so on.

  • @yomajo
    @yomajo 5 years ago +5

    40:28 Oh my God. xD Indeed mate, indeed. Great coding skills! And nice tutorial!

    • @bobhrobor4654
      @bobhrobor4654 4 years ago

      **The cheapest became the most popular... wow, I cannot believe it. And we spent an hour getting to the same idea! Nice.**

    • @RedEyedCoderClub
      @RedEyedCoderClub  2 years ago

      Thanks for the comment!

  • @Christopher-ew7jw
    @Christopher-ew7jw 4 years ago

    This was a really helpful tutorial. When I first learned python, I didn't understand web scraping. Recently I did a lot of web dev stuff, and I randomly happened upon your video. I think I'm going to do some web scraping now.

  • @misty_jeera
    @misty_jeera 3 years ago

    This is really awesome! I love how your code is so concise and logical. Thank you, helped a lot : )

  • @hansurmann
    @hansurmann 4 years ago +2

    Great video, but a couple of tips. First, the price isn't always fixed; it often varies with the product variation (here's an example: www.ebay.com/itm/184168178192, where the price depends on the "AC Mains Voltage" option). Second, items should be compared not by price alone but by price + shipping: many sellers list an item for $1 and set shipping to $10. Moreover, the shipping price depends on the country the request to eBay comes from. If you're interested, I can suggest a few ideas for future videos.

  • @felixisnr1
    @felixisnr1 3 years ago

    Thank you so much, I learned a lot here.

  • @zhengzuo5118
    @zhengzuo5118 3 years ago

    Your code is very clean and easy to follow, keep up the good work

  • @rfbarrow
    @rfbarrow 3 years ago

    Thanks! A brilliant intro to web scraping for a beginner like me.

  • @joseleon9291
    @joseleon9291 3 years ago

    My final degree project is about this, amazing!

  • @slimp4ever
    @slimp4ever 5 years ago +2

    I really enjoy your tutorials! Keep making new ones!

    • @RedEyedCoderClub
      @RedEyedCoderClub  5 years ago +1

      Thank you!
      Telegram bot tutorial is ready.
      ruclips.net/video/cX8m3sp_w84/видео.html

  • @vladlemos
    @vladlemos 4 years ago

    Thank you! The way you explain is very clear.

  • @Saab1100
    @Saab1100 4 years ago

    Very good, I am going to try a similar project. Thanks.

  • @fahadmunir5426
    @fahadmunir5426 4 years ago +1

    The process only scrapes the data from the link provided, not the other index pages from the pagination. How can I automate it to get data from all the pages?

  • @64jayr
    @64jayr 3 years ago

    All was going well until I tried to do the "items sold" part. When I run it, nothing is returned. Can you please help me with this? I need these items but am unsure why it doesn't work. The rest was great.

    • @RedEyedCoderClub
      @RedEyedCoderClub  3 years ago

      The code you see in the browser's Inspector is not the same as what you get with the Requests library (*r.text*), because browsers execute JavaScript.
      So check the code you get with Requests (you can save the content of *r.text* to a file) and examine the saved file with the Inspector.
      If the code you need isn't there, I suggest using Selenium.
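The suggestion above can be sketched like this; `html` stands in for `r.text` so the snippet runs without a network request:

```python
# Dump the HTML that Requests actually received to a file, then open
# that file in a browser and compare it with what the Inspector shows.
html = "<h1 id='itemTitle'>Example item</h1>"  # stand-in for r.text

with open("page_dump.html", "w", encoding="utf-8") as f:
    f.write(html)

# Re-read to confirm the dump is faithful to what Requests saw.
with open("page_dump.html", encoding="utf-8") as f:
    saved = f.read()

print(saved == html)
```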

  • @stanysumanth8703
    @stanysumanth8703 3 years ago

    I tried following along, but I got an UnboundLocalError: local variable 'soup' referenced before assignment.
    How do I tackle this?

  • @37even
    @37even 5 years ago +1

    I followed every step up till 14:00, but I get an empty print from the script, although it finishes without errors. Something is wrong here: I can change the URL to any page, or even leave it blank, and the script still finishes without errors but with empty data...

    • @RiteshYadav-rc1np
      @RiteshYadav-rc1np 4 years ago

      Same error for me. Have you solved it yet?

    • @37even
      @37even 4 years ago +1

      @@RiteshYadav-rc1np nope...

    • @RiteshYadav-rc1np
      @RiteshYadav-rc1np 4 years ago

      @@37even I solved it; if you want, I can share.

    • @37even
      @37even 4 years ago

      @@RiteshYadav-rc1np That would be really cool, thanks in advance!

    • @felix1672
      @felix1672 4 years ago

      @@RiteshYadav-rc1np Hi, could you also share the code with me? I have the same issue!

  • @anujlahoty8022
    @anujlahoty8022 5 years ago

    Thanks for this tutorial.

  • @tawhidhasan7556
    @tawhidhasan7556 4 years ago

    Hey, I copied your code, but when I run it, it gives an error and I can't figure out why.
    Errors:
    Traceback (most recent call last):
      File "ebay.py", line 78, in <module>
        main()
      File "ebay.py", line 74, in main
        write_csv(data, link)
      File "ebay.py", line 64, in write_csv
        writer.writerow(row)
      File "D:\Installation\Anaconda\lib\encodings\cp1252.py", line 19, in encode
        return codecs.charmap_encode(input,self.errors,encoding_table)[0]
    UnicodeEncodeError: 'charmap' codec can't encode character '\U0001f381' in position 0: character maps to <undefined>

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      Windows still struggles with Unicode. Try specifying the encoding explicitly:
      with open('output_file_name', 'w', newline='', encoding='utf-8') as ...
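A runnable sketch of that fix, with a sample row containing the gift emoji (U+1F381) from the traceback; the file name and row contents are illustrative:

```python
import csv

# Open the CSV with an explicit utf-8 encoding so characters like
# U+1F381 survive on Windows, whose default codec is often cp1252.
rows = [["\U0001f381 Gift watch", "9.99"]]

with open("output.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "price"])
    writer.writerows(rows)

# Reading it back shows the emoji round-tripped without an error.
with open("output.csv", encoding="utf-8") as f:
    lines = f.read().splitlines()
print(lines[1])
```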

    • @tawhidhasan7556
      @tawhidhasan7556 4 years ago

      @@RedEyedCoderClub Thank you

  • @pro100_igor
    @pro100_igor 4 years ago +2

    Greetings, Oleg!
    Why the pseudonym? Practicing your English?
    And a note on the code:
    the get_text() method takes options such as a separator and strip, which makes the code shorter.
    Example: soup.get_text(" ", strip=True)

  • @samswayze4143
    @samswayze4143 3 years ago

    Strange result... the program writes the same listing 5 or 6 times to the CSV file before moving on to the next listing. Any idea why?

  • @sandykurnia7150
    @sandykurnia7150 3 years ago

    Awesome, thank you!

  • @TheJarJarKinks
    @TheJarJarKinks 4 years ago

    Great video. Also, +1 for manjaro. Not the lightest distro, but it's one of my favorites.

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      Thank you. But I use Mint.

    • @TheJarJarKinks
      @TheJarJarKinks 4 years ago

      @@RedEyedCoderClub My mistake... but I also love Mint! It's running on the laptop I use for testing scripts. Great choice.

  • @carlosmatosfanpage2856
    @carlosmatosfanpage2856 5 years ago +4

    Hello.
    I want to build a website that compares the price of a product across different websites. How do I display the data which I have scraped on my website?

  • @theadrix92
    @theadrix92 4 years ago

    Super! Thank you very much.

  • @waffles6555
    @waffles6555 3 years ago

    Hello. I have my code copied exactly as you have written it, but when I go to print h1, the command window displays None. Any suggestions? Good tutorial regardless!

    • @RedEyedCoderClub
      @RedEyedCoderClub  3 years ago

      Check the HTML code you got from Requests:
      >>> print(r.text)
      or save it to an .html file, then open it in a browser and examine the code with the Inspector.

  • @langer9436
    @langer9436 4 years ago

    Why do I, as a seller on eBay, get a 405 error message? I'm trying to automatically copy my sold items (the article names, with an item number in the description) into Excel, but it won't work... :(

  • @2ncielkrommalzeme210
    @2ncielkrommalzeme210 1 year ago

    In your code in the video, the first if statement is followed by an else, with a print call between the if and the else. I wrote it the same as yours, but when I run it, it gives an invalid syntax error. I'd be glad if you could answer.

  • @24u83qyui3yr8932yi3q
    @24u83qyui3yr8932yi3q 4 years ago +2

    How are you looping through the pages of a listing? How do you know when to stop adding +1 to _pgn?

  • @singular23
    @singular23 4 years ago

    9:00 Getting an error at soup = BeautifulSoup(response.text, 'lxml')
    This gives:
      File "/home/user/anaconda3/lib/python3.7/site-packages/bs4/builder/_lxml.py", line 128, in __init__
        super(LXMLTreeBuilderForXML, self).__init__(**kwargs)
    TypeError: super(type, obj): obj must be an instance or subtype of type
    Any idea what to do? :-(

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      Look for the part of the traceback that refers to your own code. You can identify it by the module paths.

  • @DeepDiveinUniverse
    @DeepDiveinUniverse 4 years ago

    Fantastic!! Keep it up!

  • @aidangasim
    @aidangasim 4 years ago

    Hi, when I run the first stage to get a class, it says there's no parser on my computer. What does that mean? How can I tackle this? Help please (((

  • @ghenagolovatic7340
    @ghenagolovatic7340 2 years ago

    Hi, I am a beginner and I started following your steps, but when I run it, nothing is displayed in the console: no errors, but also no information about the header or anything from eBay. Could the issue be with the eBay website? Maybe they protect the info? Thanks.

    • @RedEyedCoderClub
      @RedEyedCoderClub  2 years ago +1

      If nothing happens, it means that you forgot something: to return a value or to call a function, for example.
      If there were an issue with eBay or your code, Python would raise an exception indicating the problem.
      So please check your code.

    • @ghenagolovatic7340
      @ghenagolovatic7340 2 years ago

      @@RedEyedCoderClub Thank you!

  • @DrunkenKnight71
    @DrunkenKnight71 4 years ago

    Thank you, I learned a lot.

  • @barzhikevil6873
    @barzhikevil6873 4 years ago

    Thanks dude, I found it useful! The fact that eBay's formatting is so irregular kind of annoys me, lol.

  • @13justiny
    @13justiny 3 years ago

    I'm trying to do this for sold listings, but sometimes I get an empty list and sometimes I don't. I'm assuming eBay is trying to prevent scraping of sold listings?

    • @RedEyedCoderClub
      @RedEyedCoderClub  3 years ago

      IMO, if eBay prevented you from scraping its pages, it would hardly be possible to scrape anything.

    • @justinyang3655
      @justinyang3655 3 years ago

      @@RedEyedCoderClub Looks like eBay adds a captcha to prevent scraping of sold listings. Probably why they restricted the API for it as well.

    • @64jayr
      @64jayr 3 years ago

      I got stuck here, too. I really need that sold number. Is there any workaround?

  • @akamalov
    @akamalov 5 years ago

    Thanks a lot for the tutorial.

  • @bobhrobor4654
    @bobhrobor4654 4 years ago

    **The cheapest became the most popular... wow, I cannot believe it. And we spent an hour getting to the same idea! Nice.**

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      Sorry, but you didn't get the idea of the video.

  • @francescowang
    @francescowang 3 years ago +1

    For "Details about Military Leather Stainless Steel Quartz Analog Army Men's Cute Wrist Watches"
    I can't find data-mtdes.

    • @RedEyedCoderClub
      @RedEyedCoderClub  3 years ago

      Probably that attribute isn't there. Check the HTML code you got with the Requests library (that's the code before any JS has changed it).

  • @KingQAT
    @KingQAT 4 years ago

    I am using the US site and following your code while fixing the things that are different, but I still can't extract the hrefs. Please help.

  • @alexanderscott2456
    @alexanderscott2456 4 years ago

    THANK YOU!
    28:30: I had been stuck on this particular problem for almost two days. I didn't realize that you need to use a list comprehension to unpack find_all(). Appending wouldn't work, and I was just looking for any video that might give the answer.

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      I used the list comprehension just for the sake of brevity.
      It's equivalent to:
      urls = []
      for item in links:
          urls.append(item.get('href'))
      And I assume that links is not None.
      If you got exceptions, you have to figure out what's wrong with your data.

    • @alexanderscott2456
      @alexanderscott2456 4 years ago

      I was using for item in links: item.get('href'), and that didn't work. When using [item.get('href') for item in links], it did work.

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      It's weird.
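A likely explanation for the exchange above: a bare loop evaluates `item.get('href')` and throws the value away, while the comprehension collects it. Plain dicts stand in for BeautifulSoup tags so the sketch is self-contained:

```python
links = [{"href": "/itm/1"}, {"href": "/itm/2"}]  # stand-ins for bs4 tags

for item in links:
    item.get("href")          # value computed, then discarded

urls = [item.get("href") for item in links]   # values collected into a list
print(urls)  # ['/itm/1', '/itm/2']
```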

  • @user-lh4hv3tx8b
    @user-lh4hv3tx8b 4 years ago

    I'm curious: why use 'if not' instead of 'if' when printing, as in if not response.ok:?
    Thank you.

  • @mohdsalmanansari5992
    @mohdsalmanansari5992 4 years ago

    You've got a good voice ❤️

  • @rajiv-kc
    @rajiv-kc 4 years ago

    I have followed your code exactly the way you worked in the video, but in my output (CSV) file I am getting null for everything except the links column. Is there any way you could take a look at my code?
    Thanks!

  • @flioink
    @flioink 3 years ago

    Having trouble fetching the product name; it seems the format has changed.
    I sort of fixed it with:
    title = soup.find("h1", id="itemTitle").text
    title = title.lstrip("Details about \xa0")

    • @RedEyedCoderClub
      @RedEyedCoderClub  3 years ago +1

      It's great!

    • @64jayr
      @64jayr 3 years ago

      Hi, this is a lifesaver; I was struggling. Can you tell me how you fixed it and came to this conclusion?
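One caveat about the workaround above: `str.lstrip` strips a *set of characters*, not a literal prefix, so it can eat the start of some titles. A sketch of the pitfall and a safer alternative using `str.removeprefix` (Python 3.9+); the sample titles are made up:

```python
PREFIX = "Details about \xa0"

ok = (PREFIX + "Stainless Steel Watch").lstrip(PREFIX)
bad = (PREFIX + "adidas watch").lstrip(PREFIX)  # leading 'a' is in the strip set
print(ok)   # Stainless Steel Watch
print(bad)  # didas watch  <- the title's first letter was eaten

# removeprefix removes the exact prefix string, nothing more.
fixed = (PREFIX + "adidas watch").removeprefix(PREFIX)
print(fixed)  # adidas watch
```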

  • @johnmclaurin2356
    @johnmclaurin2356 4 years ago

    How do I do this with another eBay link, and what happens if there is more than one page?

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago +1

      The core idea is the same: you have to get a link to the next page and make a request to it.
      In most cases there'll be a pagination bar at the bottom of the page. Look at the links to the other pages and you'll probably notice the difference between them: they all differ in one parameter (the page number).
      In this case each page has the _pgn= parameter, and the value of the parameter is the page number.
      By changing the value of the _pgn parameter you'll get the next page.
      Consider watching my latest video; there is a useful trick to get all pages.
      ruclips.net/video/3fcKKZMFbyA/видео.html
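The pagination idea above can be sketched by building each page's URL with a different `_pgn` value; the search query is an assumption, and a real loop would stop once a page returns no listings:

```python
from urllib.parse import urlencode

BASE = "https://www.ebay.com/sch/i.html"

def page_url(query, page):
    # _nkw is eBay's search-keyword parameter, _pgn the page number.
    return BASE + "?" + urlencode({"_nkw": query, "_pgn": page})

for n in range(1, 4):
    print(page_url("wrist watch", n))
```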

  • @shounak15
    @shounak15 3 years ago

    Good code; everything worked perfectly except the page iteration and the table titles/headers in the CSV file.
    For the table titles, I'll have to move the CSV creation outside of the loop, with only the append inside the loop.
    For the page iteration, I observed something weird: this code isn't supposed to go to page 2, yet mine went halfway through page 2. I need to update it to scrape all the pages.

  • @BeattapeFactory
    @BeattapeFactory 4 years ago

    Wow, thanks a lot!

  • @langer9436
    @langer9436 4 years ago

    Hello, how can I connect through a proxy to do this? Why does eBay recognize me as a bot? Thank you for your help.

    • @RedEyedCoderClub
      @RedEyedCoderClub  2 years ago

      Yes, you can use proxies. I don't know why eBay thinks you are a bot.

  • @anja1303
    @anja1303 4 years ago

    I have a question. I just did your tutorial (first of all, thank you! It helped me a lot!), but with my URL the web scraper doesn't go on to page 2 and 3 and so on... Did I miss that part?

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      "web scraper doesn't go forward to site 2 and 3 and so on"
      What do you mean?

    • @anja1303
      @anja1303 4 years ago

      @@RedEyedCoderClub At minute 25:30 you changed the URL in the script so that you only have to change the last part of the URL (pgn=1, pgn=2, and so on) to go through all the pages and scrape them, or did I get this wrong?

    • @anja1303
      @anja1303 4 years ago

      And in your code I cannot find such a function to scrape all the products from page 1, then go to the second page and scrape all the products there, and so on.

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago +1

      I scrape only the first page. You can get the other pages by incrementing the _pgn= parameter in the URL.

  • @kaitsomething343
    @kaitsomething343 5 years ago +8

    Man you could make a lot of money doing ASMR

  • @anthonybryanmonteveros281
    @anthonybryanmonteveros281 4 years ago

    How about the code for the quantity available? Can you help me, please? I don't know if it is a span, or what class or id. Please help.

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      A span is <span>...</span>
      An id is id="smth"
      A class is class="smth"
      A span tag with an id and a class is: <span id="smth" class="smth">...</span>
      etc.
      To scrape successfully you need to know the basics of HTML and CSS.

    • @anthonybryanmonteveros281
      @anthonybryanmonteveros281 4 years ago

      @@RedEyedCoderClub Thank you for the response; I already solved that problem. Now I can't extract all the links at this URL: www.ebay.com/sh/research?dayRange=30&endDate=1591483242640&format=FIXED_PRICE&keywords=Don+C%09Cal+Ripken+Jr.++project+2020&marketplace=EBAY-US&queryCondition=AND&startDate=1588977642640&tabName=ACTIVE I need all the data there, but I can scrape only one item at a time. Is it possible to get all the data like in the video with that link?

  • @datascience4952
    @datascience4952 4 years ago

    Red, is there any way to automate creating a listing on eBay with Python?

  • @aleksandara.9728
    @aleksandara.9728 4 years ago

    Hi Oleg, thank you very much for the tutorial! I have a question: at 27:36, when I run the code I always receive an empty list back. I am not sure what the mistake is, because I modified the HTML tags for my country. Best regards! =D

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago +1

      I cannot say anything for sure without the source code. Try to get the parent container of your data first, and only then go deeper.

    • @felix1672
      @felix1672 4 years ago +1

      Hi Alex, I have the same issue. Did you resolve it? I would really appreciate your help!

    • @aleksandara.9728
      @aleksandara.9728 4 years ago

      @@felix1672 I didn't solve it, because in my country eBay is rendered with JS. The same code works perfectly fine on Wikipedia. :-/

  • @user-lh4hv3tx8b
    @user-lh4hv3tx8b 4 years ago

    I'm trying to find('a'), but I get None returned.

  • @SuperWombus
    @SuperWombus 5 years ago

    Up to 14:06 in the tutorial, the line of code:
    h1 = soup.find('h1', id='itemTitle').find('a').get('data-mtdes')
    throws an error:
      File "ebay.py", line 29, in <module>
        main()
      File "ebay.py", line 26, in main
        get_detail_data(get_page(url))
      File "ebay.py", line 19, in get_detail_data
        h1 = soup.find('h1', id='itemTitle').find('a').get('data-mtdes')
    AttributeError: 'NoneType' object has no attribute 'get'

    • @RedEyedCoderClub
      @RedEyedCoderClub  5 years ago

      It means that .find('a') returned None, i.e. "nothing": BeautifulSoup didn't find an 'a' tag (or an 'h1' tag) matching your criteria.
      To see what BeautifulSoup found, you can comment out the subsequent method calls and print the results.
      E.g.
      h1 = soup.find('h1', id='itemTitle').find('a') #.get('data-mtdes')
      print(h1)
      or
      h1 = soup.find('h1', id='itemTitle') #.find('a').get('data-mtdes')
      print(h1)
      etc.
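A defensive variant of that chained lookup is to check each step for None instead of letting the chain raise AttributeError. The inline HTML snippet is hypothetical and deliberately has no a tag inside the h1, reproducing the error above:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<h1 id="itemTitle">Example item</h1>', "html.parser")

h1 = soup.find("h1", id="itemTitle")
link = h1.find("a") if h1 else None
# Fall back to the h1's text when the <a> (or its attribute) is missing.
title = link.get("data-mtdes") if link else (h1.get_text() if h1 else None)
print(title)
```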

    • @RiteshYadav-rc1np
      @RiteshYadav-rc1np 4 years ago

      Same error for me too. How did you correct it?

  • @johnbreakwater448
    @johnbreakwater448 4 years ago

    By the way, nobody seems to explain how to download the full item description from an eBay page. It looks like it's not quite a trivial task?

  • @dikranmahyu8479
    @dikranmahyu8479 3 years ago

    A very nice explanation; thank you for it.
    What I'm really curious about is this: there are boring daily office tasks, such as checking a company's listings on different online sales platforms, analyzing any drop in a listing, deleting and re-listing if necessary, and doing drop shipping. For someone doing that kind of job, outsourcing the technical side isn't very efficient: once a system does the work, the person running the system gets bypassed. So I'd like to start as an amateur first, and if I get results and adapt, take professional training and grow my career. My research led me to the Python programming language. I should say up front that I have no background in the software industry, but my job requires research and I'm open to self-improvement. Do you think I could write such a bot? The bot would also need to go unnoticed by the platform. There's a lot of material about this online, but not advanced Python; I'm interested in how to write a solid bot and integrate it with my work. I'd appreciate your suggestions and advice pointing me in the right direction. Thanks in advance for your answer.

    • @RedEyedCoderClub
      @RedEyedCoderClub  3 years ago

      I think that everything is possible. Just do it.

  • @kumarmannu9476
    @kumarmannu9476 5 years ago

    Awesome tutorial, I really enjoyed it and learned a lot!! Thank you so much.
    The only thing I want to ask: my output for the title is like "Leather Band Round Quartz Analog Elegant Classic Casual Men's Wrist Watch New | eBay". How do I remove this "| eBay"? I checked but couldn't work it out.

    • @RedEyedCoderClub
      @RedEyedCoderClub  5 years ago

      Just split the string by '|' and take the first element of the list.
      Something like this:
      title.split('|')[0]

    • @aaronhughes4199
      @aaronhughes4199 5 years ago

      At least you're getting an output; mine just says 'None'. What am I supposed to do about that?

    • @RiteshYadav-rc1np
      @RiteshYadav-rc1np 4 years ago

      @@aaronhughes4199 I think you missed .get_text()

  • @sourabhrananawareyujfestbw9858
    @sourabhrananawareyujfestbw9858 3 years ago

    How do I fetch data for all the pages?

    • @RedEyedCoderClub
      @RedEyedCoderClub  3 years ago +1

      The same way as the first page: just pass the URLs of the other pages into the get_html() function.
      You can get those URLs the same way as the main content.

    • @sourabhrananawareyujfestbw9858
      @sourabhrananawareyujfestbw9858 3 years ago

      @@RedEyedCoderClub Thank you!

  • @kenshinnanashi9469
    @kenshinnanashi9469 5 years ago

    Hi, cool tutorial, but why wouldn't you prefer to use the eBay API?

  • @RiteshYadav-rc1np
    @RiteshYadav-rc1np 4 years ago

    When I print the title variable, the output is None.

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago +1

      It means that BeautifulSoup didn't find what you asked for.

    • @RiteshYadav-rc1np
      @RiteshYadav-rc1np 4 years ago

      @@RedEyedCoderClub But I have written the same code as yours.

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      My code is working... To be exact, it worked at that moment.
      Also, it's not a recipe, just the core idea. Check the object you've got, and check how you got it.

    • @jmb283gt3
      @jmb283gt3 4 years ago

      I had the same issue. soup.find('h1', id='itemTitle') yields a completely different result for me than what is on his screen. What I see doesn't have two languages, and there is just an h1 tag and no a tag, so the find('a') portion finds nothing on my side. I worked around this by using what he showed us in the price section: dropping everything from find('a') onward and replacing it with .text.

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago +1

      You are right. I just checked eBay.com via a US proxy, and the HTML structure differs from the one in the video.
      To get the items we have to use
      soup.find('ul', class_='srp-results').find_all('li')
      that is, we get the *UL* with all the search results and take all its *LI* tags. Each *LI* tag has an *a* tag with the *s-item__link* class.
      And there's an *h3* tag instead of *h1*...
      imgur.com/IHmkicX
      But the idea of how to get the data is absolutely the same, and the video is just a demonstration of that idea.
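A runnable sketch of the structure described in that reply, using an inline snippet shaped like the newer results markup (the class names come from the comment; the HTML itself is illustrative, not real eBay):

```python
from bs4 import BeautifulSoup

html = """
<ul class="srp-results">
  <li><h3>Watch A</h3><a class="s-item__link" href="https://www.ebay.com/itm/1">link</a></li>
  <li><h3>Watch B</h3><a class="s-item__link" href="https://www.ebay.com/itm/2">link</a></li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")
# Get the UL with all results, then walk its LI tags.
items = soup.find("ul", class_="srp-results").find_all("li")
links = [li.find("a", class_="s-item__link").get("href") for li in items]
titles = [li.find("h3").get_text() for li in items]
print(titles, links)
```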

  • @denisdarwasaputra6134
    @denisdarwasaputra6134 4 years ago

    Does eBay allow scraping?

  • @freddibiri
    @freddibiri 4 years ago

    Windows only has one pip3 :(

  • @dendisega1675
    @dendisega1675 5 years ago

    Could you make the same video, but for Avito? Also, you didn't show how to automatically switch pages from 1 to 2 and so on.

  • @eyyubaydin1370
    @eyyubaydin1370 3 years ago

    If you get something like this:
    Sportlife (you can use .text to delete all that other stuff)
    title = soup.find('h1').find('span').text
    Best tutorial about web scraping on YouTube.

  • @python360
    @python360 4 years ago

    Why didn't you use the eBay API? It would have saved a lot of code, and it's free.

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago +1

      Because there are reasons not to use an API, just as youtube-dl doesn't.
      Besides, using an API is not web scraping at all.

  • @varanakonda
    @varanakonda 5 years ago +1

    You have a nice Russian accent :)

  • @nagltres
    @nagltres 4 years ago

    ........I love you...

  • @alexhernandez8550
    @alexhernandez8550 4 years ago

    I would not recommend web scraping a website, because you could get in trouble and get banned, or worse. If anything, use an API if one is provided.

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      You are a bit wrong: I have several examples of how people got into trouble using an API.
      On the other hand, youtube-dl doesn't use the YouTube API at all, and it's a great tool.
      Is there a risk of being banned? Yes, of course. But it's not a problem; we just need to be careful.
      To use or not to use an API is just a matter of personal preference.

  • @mmanuel6874
    @mmanuel6874 5 years ago

    Where is the code?

  • @mys5108
    @mys5108 5 years ago

    Can I have the code? Thanks.

  • @TheAstralftw
    @TheAstralftw 5 years ago

    I love the Russian accent. I find tutorials by Indian speakers hard to follow; I have nothing against them, I just can't listen to them. But Russian... I love that language, and I know a little Russian: "ya ponimayu russky nemnogo" :D Croatian is also very similar to Russian. Cheers.

  • @mayankdevnani894
    @mayankdevnani894 5 years ago

    Which OS are you using?

  • @zachpawlik7671
    @zachpawlik7671 3 years ago

    Are you French?

  • @viktor3512
    @viktor3512 4 years ago

    Are you from the CIS?

  • @aravindbedean
    @aravindbedean 4 years ago

    When I tried the split command for price and currency, it says
    AttributeError: 'list' object has no attribute 'split'
    Can you please help me?

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago +2

      You used a method that returns a list (.find_all(), for example), and then tried to split it.

    • @aravindbedean
      @aravindbedean 4 years ago

      Red Eyed Coder Club okay, I will let you know once I try find_all and then try to split it.

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      The .find_all() method will return a list again, and you'll get the exception again.
      The .split() method is a string method, and only a string object can call it.

    • @aravindbedean
      @aravindbedean 4 years ago

      Red Eyed Coder Club so what should I do to avoid that error?

    • @RedEyedCoderClub
      @RedEyedCoderClub  4 years ago

      Focus your attention first on understanding what you are doing and what is happening.
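The failure discussed in this thread can be sketched without any scraping at all: `.split()` is a string method, so it must be called on each element of the list that `.find_all()` yields, never on the list itself. The sample price strings stand in for `tag.text` values:

```python
prices = ["US $9.99", "US $12.50"]  # stand-ins for scraped tag.text values

# prices.split("$")  # AttributeError: 'list' object has no attribute 'split'

parsed = [p.split("$") for p in prices]          # split each string instead
currencies = [cur.strip() for cur, _ in parsed]  # text before the '$'
amounts = [float(amt) for _, amt in parsed]      # numeric part after the '$'
print(currencies, amounts)  # ['US', 'US'] [9.99, 12.5]
```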