By far the best web scraping instructional video I've seen!
Thank you!
Your coding style shows that you're a very skilled developer ..... GREAT KNOWLEDGE...
I looked for videos like this a lot and I found your videos. These are perfect, thank you!
Thank you very much!
Thanks, great videos!
I just started learning Python; this has been a great tutorial and helped me to better understand both web scraping and functional programming. Please make more tutorials!
Glad to help. Thanks for watching!
You write code like a poet writes poems, much respect.
Thank you very much!
I really like the way You explain everything. It's very clear and precise. Thank You.
Thank you!
This is really awesome! I love how your code is so concise and logical. Thank you, helped a lot : )
Thank you!
Your code is very clean and easy to follow, keep up the good work
Thank you!
I really enjoy your tutorials! Keep on making new ones!
Thank you!
Telegram bot tutorial is ready.
ruclips.net/video/cX8m3sp_w84/видео.html
40:28 Oh my God. xD Indeed mate, indeed. Great coding skills! And nice tutorial!
The cheapest became the most popular... wow, cannot believe it! And we spent 1 hour to get to the same idea! Nice!
Thanks for comment!
This was a really helpful tutorial. When I first learned python, I didn't understand web scraping. Recently I did a lot of web dev stuff, and I randomly happened upon your video. I think I'm going to do some web scraping now.
Thanks for watching!
thank you so much, i learned a lot here
Thank you!
Thanks! A brilliant intro to web scraping for a beginner like me.
Thank you! Glad the video was helpful.
Thank you! Very clear the way you explain.
Thank you
My final degree project is about this, amazing!
Glad that it helped!
Fantastic !! Keep it up....
Thank you!
Very good, I am going to try a similar project. Thanks.
Hi Oleg! Why the pseudonym? Practicing your English?
And a small note on the content:
the get_text() method has options, such as a separator and strip. The code ends up shorter as a result.
Example: soup.get_text(" ", strip=True)
Thanks for comment!
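A minimal sketch of the tip above, with a made-up snippet of HTML; get_text(separator, strip=True) collapses a tag's strings in one call:

from bs4 import BeautifulSoup

html = '<h1>  Classic  <span>Wrist Watch</span> </h1>'
soup = BeautifulSoup(html, 'lxml')
print(soup.get_text(' ', strip=True))  # Classic Wrist Watch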
Good video ❤, I have learned a lot from this video alone.
Please make a video on how to scrape with email:pass pairs from a txt file and get details like balance and so on.
Great video, but here are a couple of tips. First, the price isn't always fixed; it often changes depending on the product variation (here's an example: www.ebay.com/itm/184168178192 - the price depends on the "AC Mains Voltage" option). Second, items should be compared not by price alone, but by price + shipping. Many sellers set the price to $1 and shipping to $10. Moreover, the shipping price depends on the country the request to eBay comes from. If you're interested, I can suggest a few ideas for your next videos.
Thanks for comment!
Thanks for this tutorial.
How are you looping through pages of a listing? How do you know when to stop adding +1 to pgn?
I have the same question please
Thanks for comment!
super! thank you very much
Thanks for watching!
Thanks a lot for the tutorial.
Thanks for comment!
THANK YOU!
28:30 - I have been stuck on this particular problem for almost two days. I didn't realize that you need to use list comprehension to unpack find_all(). Appending wouldn't work and I was just looking for any video that might give the answer.
I used the list comprehension just for the sake of brevity.
It's equivalent to:
urls = []
for item in links:
    urls.append(item.get('href'))
And I assume that links is not None.
If you get exceptions, you have to figure out what's wrong with your data.
I was using for item in links: item.get('href'), and that didn't work. When using [item.get('href') for item in links], it did work.
it's weird
The process only scrapes the data from the link provided, but not the other index pages from the pagination. How can I automate it to get data from all the pages?
Thanks for comment!
awesome thank you
Thanks for watching!
thank you, i learned a lot
Glad to hear! Thanks for watching!
Great video. Also, +1 for manjaro. Not the lightest distro, but it's one of my favorites.
Thank you. But I use Mint.
@@RedEyedCoderClub My mistake... but I also love Mint! It's running on my laptop I use for testing scripts. Great choice.
Hello. I have my code copied as you have written, but when I go to print h1 the command window displays None. Any suggestions? Good tutorial regardless!
Check the HTML code you got from Requests:
>>> print(r.text)
or save it to an .html file, then open it in a browser and examine the code with the Inspector.
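A minimal sketch of that save-and-inspect approach, assuming a url variable is already defined:

import requests

r = requests.get(url)
# save the raw HTML that Requests received (before any JavaScript runs)
with open('page.html', 'w', encoding='utf-8') as f:
    f.write(r.text)
# open page.html in a browser and compare it with the live page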
I followed every step up till 14:00, but I get an empty print from the script, even though the script finishes without errors. Something is wrong here: I can change the URL to anything, any page or even blank, and the script still finishes without an error but with empty data...
same error for me. have u solved yet?
@@RiteshYadav-rc1np nope...
@@37even i solved it if you want i can share
@@RiteshYadav-rc1np That would be really cool, thanks in advance!
@@RiteshYadav-rc1np hi could you also share the code with me - i have the same issue!
Hello.
I want to build a website that compares the price of a product across different websites. How do I display the data which I have scraped on my website?
Thanks for comment!
I have a question. I just did your tutorial (first of all, thank you! It helped me a lot!), but with my URL the web scraper doesn't go forward to page 2 and 3 and so on... Did I miss that part?
"web scraper doesn't go forward to site 2 and 3 and so on"
What do you mean?
@@RedEyedCoderClub At minute 25:30 you changed the URL in the script, so that you only have to change the last part of the URL (pgn=1, pgn=2 and so on) to go through all the pages and scrape them. Or did I get this wrong?
And in your code I cannot find such a function: one that scrapes all the products from page 1, then goes to the second page and scrapes all the products there, and so on.
I scrape only the first page. You can get the other pages by incrementing the _pgn= parameter in the URL.
You got good voice ❤️
A very nice explanation, well done.
What I'm most curious about is this: there are some boring daily tasks in an office environment, such as reviewing the company's listings on different online sales platforms, analyzing a listing if it drops, deleting and re-uploading it if necessary, doing drop shipping, and so on. For someone doing this kind of job, buying technical services from outside isn't very efficient: once a system does the work, the person managing the system gets bypassed. So I'd like to start out as an amateur, and if I get results and adapt, I'd like to get professional training and grow my career. My research on the subject led me to the Python programming language. Let me say up front that I have nothing to do with the software industry, but my job requires research and I'm open to self-improvement. Do you think I could write such a bot? The bot also needs to go unnoticed by the system. There's a lot of material about this online, but not advanced Python; I'm interested in how to write a flawless bot and how to integrate it with my work. I would appreciate your guidance, suggestions, and advice. Thanks in advance for your answer.
I think that everything is possible. Just do it.
I'm curious: why use 'if not' instead of 'if' when printing, i.e. if not response.ok:?
Thank you
It's a bit shorter
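A minimal sketch of the guard-clause pattern that 'if not' enables; the nested alternative would push the main logic one indentation level deeper:

import requests

def get_page(url):
    response = requests.get(url)
    if not response.ok:          # bail out early on a bad status
        print('Request failed:', response.status_code)
        return None
    return response.text         # the happy path stays unindented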
Good code. Everything worked perfectly except the page iterations and the table title/headers in the CSV file.
For the table titles, I will have to move the CSV header creation outside of the loop, so that only the append goes inside the loop.
For the page iterations, I observed something weird. This code is not supposed to go to page 2, yet my code went halfway through page 2. I need to update it to scrape all the pages.
Check your code
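A minimal sketch of the header fix the commenter describes, with placeholder column names and data:

import csv

rows = [('Watch', '19.99'), ('Band', '9.99')]  # placeholder scraped data

with open('output.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['title', 'price'])  # header written once, outside the loop
    for row in rows:                     # only the rows go inside the loop
        writer.writerow(row)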
I'm trying to do this for sold listings, but sometimes I get an empty list, sometimes I don't. I'm assuming Ebay is trying to prevent scraping sold listings?
IMO, if eBay wanted to prevent you from scraping its pages, it would be hardly possible to scrape anything.
@@RedEyedCoderClub Looks like ebay adds a captcha to prevent scraping on sold listings. Probably why they restricted the api for it as well.
I got stuck here, too. I really need that sold number. Is there any workaround?
Thanks dude, I found it useful! The fact that Ebay formatting is so irregular kind of annoys me lol
Thanks for watching
I have followed your code exactly the way you wrote it in the video, but in my output (csv) file I am getting null for everything except the links column. Would there be a way you could take a look at my code?
Thanks!
Ok, you can use, for example, Pastebin for that
Is there one you follow or recommend?
pastebin.com/
Red, is there any way to automate creating a listing on eBay with Python?
I think yes, but I'd have to elaborate on the issue.
In your code in the video, in the first if statement, you use else afterwards. I wrote it just like yours, but when I run it, it gives an invalid syntax error. Before the else you used a print call between the if and the else. If you can answer me, I will be glad.
Did you check your code twice?
five times
Having trouble with fetching the product name; it seems like the format has changed.
I sort of fixed it with:
title = soup.find("h1", id="itemTitle").text
title = title.lstrip("Details about \xa0")
It's great!
Hi, this is a lifesaver - I was struggling. Can you tell me how you fixed it and came to this conclusion?
wow thanks a lot!
Thanks for watching!
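One caveat on the workaround above: str.lstrip() removes any leading run of the listed characters, not the literal prefix, so it can eat the start of a title too. A sketch of a safer prefix removal, assuming the same 'Details about' prefix:

title = 'Details about \xa0Classic Casual Wrist Watch'
prefix = 'Details about \xa0'
if title.startswith(prefix):
    title = title[len(prefix):]   # or title.removeprefix(prefix) on Python 3.9+
print(title)  # Classic Casual Wrist Watch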
Awesome tutorial, I really enjoyed it and learned a lot!! Thank you so much.
The only thing I want to ask: my output for the title is like "Leather Band Round Quartz Analog Elegant Classic Casual Men's Wrist Watch New | eBay". How do I remove this '| eBay'? I checked but couldn't figure it out.
Just split the string by '|' and take the first element of the list.
Something like this:
title.split('|')[0]
At least you're getting an output, mine just says 'None'. What the fuck am I supposed to do about that?
@@aaronhughes4199 i think u missed .get_text()
how to fetch data for all the pages??
The same way as the first page. Just pass the URLs of the other pages into the get_html() function.
You can get the URLs the same way as the main content.
@@RedEyedCoderClub thank you!
Hi, I am a beginner and I started to follow your steps, but when I run it nothing is displayed in the console: no errors, but also no information about the header or any info from eBay. Can the issue be on the eBay website? Maybe they secure the info? Thanks
If nothing happens, it means that you forgot something: to return a value or to call a function, for example.
If there were an issue with eBay or your code, Python would raise an exception indicating the problem.
So please check your code.
@@RedEyedCoderClub thank you!
Hi Oleg, thank you very much for the tutorial! I have a question. 27:36, when I run the code i always receive an empty list back. I am not sure what the mistake is, because I modified the html tags for my country. Best regards! =D
I cannot say anything for sure without the source code. Try to get the parent container of your data and only then go deeper.
Hi Alex, I have the same issue. Did you resolve it? I would really appreciate your help!
@@felix1672 I didn't solve it, because in my country eBay is rendered with JS. The same code works perfectly fine on Wikipedia :-/
Hi, when I run the first stage for getting a class, it says that there's no parser on my computer. What does that mean? How can I tackle this? Help please (((
You need to install lxml
pip install lxml
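If installing lxml is not possible, the parser bundled with Python also works; a minimal sketch:

from bs4 import BeautifulSoup

html = '<h1>Some title</h1>'
soup = BeautifulSoup(html, 'html.parser')  # stdlib parser, slower but needs no install
print(soup.h1.text)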
Hi, cool tutorial, but why wouldn't you prefer to use the eBay API?
Thanks for comment!
All was going well until I tried to do the "items sold." When I run, I get nothing returned. Can you please help me with this? I need these items, but unsure why it doesn't work. The rest was great.
The code you see in the browser's Inspector is not the same as what you get with the Requests library in *r.text*, because browsers execute JavaScript.
So check the code you got with Requests (you can save the content of *r.text* into a file) and examine the saved file with the Inspector.
If the code you need isn't there, I suggest using Selenium.
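A minimal Selenium sketch of that fallback, assuming Chrome and a Selenium version that manages the driver itself; the URL is a placeholder:

from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
driver.get('https://www.ebay.com/itm/123456789')  # placeholder URL
html = driver.page_source     # the HTML after JavaScript has run
driver.quit()

soup = BeautifulSoup(html, 'lxml')
print(soup.find('h1'))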
9:00 Getting an error at soup = BeautifulSoup(response.text, 'lxml')
this gives:
File "/home/user/anaconda3/lib/python3.7/site-packages/bs4/builder/_lxml.py", line 128, in __init__
super(LXMLTreeBuilderForXML, self).__init__(**kwargs)
TypeError: super(type, obj): obj must be an instance or subtype of type
Any idea what to do :-(?
Look for the part of the traceback related to your own code. You can spot it by the module paths.
Strange result... program writes the same listing 5 or 6 times in the CSV file before moving onto the next listing. Any idea why?
Check your code please
How about the code for the quantity available? Can you help me, please? I don't know if it is a span, or what class or id. Please help.
A span is <span>...</span>,
an id is id="smth",
a class is class="smth".
A span tag with an id and a class is:
<span id="smth" class="smth">...</span>
etc.
To scrape successfully you need to know the basics of HTML and CSS.
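A minimal sketch for the quantity question; 'qtySubTxt' is a hypothetical id, so check the real one in the Inspector:

from bs4 import BeautifulSoup

html = '<span id="qtySubTxt">More than 10 available</span>'  # stand-in for a real page
soup = BeautifulSoup(html, 'html.parser')
qty_tag = soup.find('span', id='qtySubTxt')  # hypothetical id, verify on the item page
if qty_tag:
    print(qty_tag.get_text(strip=True))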
@@RedEyedCoderClub Thank you for the response, I already solved my problem. Now I can't extract all the links from this URL: www.ebay.com/sh/research?dayRange=30&endDate=1591483242640&format=FIXED_PRICE&keywords=Don+C%09Cal+Ripken+Jr.++project+2020&marketplace=EBAY-US&queryCondition=AND&startDate=1588977642640&tabName=ACTIVE I need all the data there, but I can only scrape one item at a time. Is it possible to get all the data like in the video with that link?
How do I do this with another ebay link and what happens if there is more than one page?
The core idea is the same. You have to get a link to the next page and make a request to it...
In most cases there'll be a pagination bar at the bottom of the page. Look at the links of the other pages and you'll probably notice the difference between them. They will all differ in one parameter (the page number).
In this case each page has the _pgn= parameter, and the value of the parameter is the page number.
By changing the value of the _pgn parameter you'll get the next page.
Consider watching my latest video: there is a useful trick to get all pages.
ruclips.net/video/3fcKKZMFbyA/видео.html
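A minimal sketch of that idea, assuming the get_page() and get_index_data() helpers from the video; the stop condition (an empty page) is an assumption:

def scrape_all_pages(base_url, max_pages=20):
    page = 1
    while page <= max_pages:
        html = get_page(f'{base_url}&_pgn={page}')  # bump the page number
        links = get_index_data(html)                # item links on this page
        if not links:                               # empty page: we ran out
            break
        for link in links:
            print(link)
        page += 1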
Does eBay allow scraping?
What's your point?
Why do I, as a seller on eBay, get a 405 error message? I'm trying to automatically copy my sold items (the article name), with the item number in the description, into an Excel file. It won't work... :(
Thanks for comment!
Hello, how can I connect through a proxy to do this? And why does eBay recognize me as a bot? Thank you for your help.
Yes, you can use proxies. I don't know why eBay decides that you are a bot.
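A minimal sketch of routing Requests through a proxy; the proxy address and credentials are placeholders:

import requests

proxies = {
    'http': 'http://user:password@proxy.example.com:8080',
    'https': 'http://user:password@proxy.example.com:8080',
}
r = requests.get('https://www.ebay.com', proxies=proxies, timeout=10)
print(r.status_code)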
I'm trying to find('a'), but I get None returned?
Check your code and the HTML in the Inspector.
What video should I make next? Any suggestions? *Write me in comments!*
Follow me @:
Telegram: t.me/red_eyed_coder_club
Twitter: twitter.com/CoderEyed
Facebook: fb.me/redeyedcoderclub
Help the channel grow! Please Like the video, Comment, SHARE & Subscribe!
I am using the US server and following your code while fixing the things that are different. But I still can't extract the hrefs. Please help.
Thanks for comment!
Up to 14:06 in the tutorial, the line of code:
h1 = soup.find('h1', id='itemTitle').find('a').get('data-mtdes') throws an error:
File "ebay.py", line 29, in
main()
File "ebay.py", line 26, in main
get_detail_data(get_page(url))
File "ebay.py", line 19, in get_detail_data
h1 = soup.find('h1', id='itemTitle').find('a').get('data-mtdes')
AttributeError: 'NoneType' object has no attribute 'get'
It means that the .find('a') call returned None, i.e. "nothing". That is, BeautifulSoup didn't find an 'a' tag or an 'h1' tag matching your criteria.
To see what BeautifulSoup found, you can comment out the subsequent method calls and print the results.
E.g.
h1 = soup.find('h1', id='itemTitle').find('a')  #.get('data-mtdes')
print(h1)
or
h1 = soup.find('h1', id='itemTitle')  #.find('a').get('data-mtdes')
print(h1)
etc.
Same error for me as well. How did you fix it?
BTW, somehow nobody explains how to download the full product description from an eBay page. It seems it's not such a trivial task?
It's all the same.
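For context: the full item description usually sits in a separate iframe, so it takes a second request. A minimal sketch, where the 'desc_ifr' id and the item URL are assumptions to verify in the Inspector:

import requests
from bs4 import BeautifulSoup

r = requests.get('https://www.ebay.com/itm/123456789')  # placeholder item URL
soup = BeautifulSoup(r.text, 'lxml')

iframe = soup.find('iframe', id='desc_ifr')  # assumed id of the description iframe
if iframe:
    desc = requests.get(iframe['src'])       # the description is a separate document
    print(BeautifulSoup(desc.text, 'lxml').get_text(' ', strip=True))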
When I tried the split command for the price and currency, it says:
AttributeError: 'list' object has no attribute 'split'
Please, can you help me?
You used a method that returns a list (.find_all(), for example), and then tried to split it.
Red Eyed Coder Club okay, I will let you know once I try find_all and then try to split it
The .find_all() method will return a list again, and you'll get the exception again.
The .split() method is a string method, and only a string object can call it.
Red Eyed Coder Club so what should I do to avoid that error?
Focus your attention first on understanding and being aware of what you are doing and what is happening.
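A minimal sketch of the difference, with a made-up price span; 'prcIsum' is a placeholder id:

from bs4 import BeautifulSoup

html = '<span id="prcIsum">US $19.99</span>'
soup = BeautifulSoup(html, 'html.parser')

tags = soup.find_all('span', id='prcIsum')  # a list of Tags: it has no .split()
tag = soup.find('span', id='prcIsum')       # a single Tag (or None)
currency, price = tag.text.split('$')       # strings do have .split()
print(currency.strip(), price)              # US 19.99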
Hey, I copied your code, but when I run it, it gives an error. I can't find out why.
Errors:
Traceback (most recent call last):
File "ebay.py", line 78, in
main()
File "ebay.py", line 74, in main
write_csv(data, link)
File "ebay.py", line 64, in write_csv
writer.writerow(row)
File "D:\Installation\Anaconda\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\U0001f381' in position 0: character maps to <undefined>
Windows still can't into Unicode. Try to specify encoding explicitly:
with open('output_file_name', 'w', newline='', encoding='utf-8') as ...
@@RedEyedCoderClub Thank you
The cheapest became the most popular... wow, cannot believe it! And we spent 1 hour to get to the same idea! Nice!
Sorry, but you didn't get the idea of the video.
When I print the title variable, it gives None as output.
It means that Beautiful Soup didn't find what you said.
@@RedEyedCoderClub But I have written the same code as yours.
My code is working... To be exact, it worked at that moment.
Also, it's not a recipe, it's just the core idea. Check the object you've got. Check how you've got it.
I had the same issue. soup.find('h1', id='itemTitle') for me yields a completely different result than what is on his screen. What I see doesn't have two languages, and there is just an h1 tag and no a tag, so the find('a') portion finds nothing on my side. I worked around this by using what he showed us in the price section: dropping everything from find('a') onward and replacing it with .text
You are right. I just checked Ebay.com via a US proxy, and the HTML structure differs from the one in the video.
To get the items we have to use
soup.find('ul', class_='srp-results').find_all('li')
that is, we have to get the *UL* with all the search results and take all its *LI* tags. Each *LI* tag has an *a* tag with the *s-item__link* class.
And there's an *h3* tag instead of *h1* ...
imgur.com/IHmkicX
But the idea of how to get the data is absolutely the same. And the video is just a demonstration of that idea.
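A minimal sketch of the updated structure that reply describes, assuming soup holds a parsed search-results page:

# soup = BeautifulSoup(r.text, 'lxml') built from a search-results page
items = soup.find('ul', class_='srp-results').find_all('li')
for li in items:
    link = li.find('a', class_='s-item__link')
    title = li.find('h3')
    if link and title:
        print(title.get_text(strip=True), link.get('href'))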
Could you make the same kind of video, but for Avito? Also, you didn't show how to automatically switch pages from 1 to 2 and so on.
Thanks for comment!
........I love you...
Thanks for comment!
windows only has one pip3 :(
Never mind just use the pip you have.
where is the code
Thanks for comment!
Can I have the code? Thanks!
Thanks for comment!
you have a nice russian accent :)
Thank you :)
Why didn't you use the eBay API? It would have saved a lot of code... and it's free.
Because there are reasons not to use an API, just like youtube-dl doesn't use one.
Also, using an API is not web scraping at all.
Which OS are you using ?
It is Linux Mint
Details about Military Leather Stainless Steel Quartz Analog Army Men's Cute Wrist Watches
I can't find data-mtdes
Probably that attribute isn't there. Check the HTML code you got with the Requests library (it's the code before any JS changes it).
If you got something like this:
Sportlife (you can use .text to delete all that other stuff)
title = soup.find('h1').find('span').text
Best tutorial about web scraping on RUclips
Check your code attentively
I would not recommend web scraping a website because you could get in trouble and get banned, or worse. If anything, use an API if one is provided.
You are a bit wrong. I have several examples of people getting into trouble when they used an API.
And on the other hand, youtube-dl doesn't use the RUclips API at all, and it's a great tool.
Is there a risk of being banned? Yes, of course. But it's not a problem, we just need to be careful.
To use or not to use an API is just a matter of personal preference.
Are you from the CIS?
Thanks for comment!
Are you French?
No :)
I fucking love the Russian accent. I hate listening to tutorials by Indian speakers.. I don't have anything against them, I just can't listen to them.. but Russian.. I love that language.. I know a little Russian.. ya ponimayu russky nemnogo :D Croatian is also very similar to Russian. Cheers.
Thank you
Man you could make a lot of money doing ASMR
Tried following, but I got an UnboundLocalError: local variable 'soup' referenced before assignment.
How do I tackle this?
Attentively check your code, please.
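A hypothetical sketch of the usual cause of that error: soup is only assigned inside a branch but used unconditionally, plus the guard-clause fix:

from bs4 import BeautifulSoup

def get_detail_data(html):
    if html:
        soup = BeautifulSoup(html, 'lxml')
    return soup.find('h1')   # UnboundLocalError when html is empty

def get_detail_data_fixed(html):
    if not html:
        return None          # handle the empty case first
    soup = BeautifulSoup(html, 'lxml')
    return soup.find('h1')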