Scraping Data from a Real Website | Web Scraping in Python

  • Published: Nov 25, 2024

Comments • 377

  • @jorge.roques5533
    @jorge.roques5533 7 months ago +165

    Honestly, I love that you include your missteps in your tutorials, for several reasons. It makes coding seem more human, and it shows us that even content creators and great programmers can have missteps they need to go back and fix, which is usually edited out of other tutorial videos. Not to mention there might be people having the same issues without understanding why, and you explain it, so it's almost a mini tutorial on debugging and your programmer thought process. Overall it was an easy 25 minutes to spend watching this. Thank you.

    • @nocturnalcb
      @nocturnalcb 7 months ago +1

      Exactly😁

    • @NaderTheExpert
      @NaderTheExpert 2 months ago +3

      Yes, I agree 100%. After following the video from beginning to end, I finally figured out how to get the same results. What made it more challenging for me was that 1. the website HTML has changed and 2. the content of the table we are scraping was updated. So, to find the table, I changed the index from 1 to 2, still didn't get the right table, so I changed it from 2 to 0. After learning the thought process and getting the right table, I spoiled myself and asked ChatGPT. ChatGPT's code was much better for scraping, but as you said, it is not as human, with mistakes, and we learn from the mistakes.

    • @Web.Scraping
      @Web.Scraping 2 months ago

      👏👍

    • @johnhudson9558
      @johnhudson9558 17 days ago

      Hello how are you doing

    • @MiriJane214
      @MiriJane214 16 days ago

      It's really good to see the different ways you go about solving problems and how you stay so calm about it Alex! The webpage has changed now so, soup.find('table') now finds the correct table. But knowing how to find it when we might need to use an index in future is really helpful.
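Several replies in this thread note that the right index into find_all('table') shifted as the page changed. A minimal sketch of why class-based selection is sturdier than index-based selection (toy HTML, not the live Wikipedia page):

```python
from bs4 import BeautifulSoup

# Toy page: a side "citations" box rendered as a table, followed by the
# main data table -- mirroring why an index into find_all('table') shifts
# whenever a table is added to or removed from the page.
html = """
<table class="box"><tr><td>citation notice</td></tr></table>
<table class="wikitable sortable"><tr><th>Rank</th></tr></table>
"""
soup = BeautifulSoup(html, "html.parser")

by_index = soup.find_all("table")[1]                        # breaks if tables move
by_class = soup.find("table", class_="wikitable sortable")  # survives reordering

print(by_index is by_class)  # True
```

Both lookups land on the same table here, but only the class-based one keeps working when a table is added or removed above it.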

  • @Charlay_Charlay
    @Charlay_Charlay 10 months ago +53

    12:21 I literally stopped when i couldn't figure out why i was getting extra titles when i pulled the titles. I'm so glad that you showed your Rookie mistake. Everyone please watch Alex's videos in full before stopping the video. Thank you for showing your mistakes.

    • @chrille91
      @chrille91 8 months ago +6

      In fact, YOUR approach is the correct way of solving such issues!
      Trying to figure out the error on your own is the ACTUAL learning taking place!
      Always try for yourself first, before you have a look at the solution. Otherwise you might fall victim to the fake-learning trap.

  • @aaronklingensmith159
    @aaronklingensmith159 11 months ago +69

    Alex: when I needed to learn SQL for my first analyst job as a career changer, you were there with videos to help me do so. Now I'm in a role that is using more python and once again, you're there! Really appreciate all the work you are putting into creating content to help people!

  • @EKTurduckin
    @EKTurduckin 1 year ago +29

    Last year I got a job as a BI Analyst and I've been watching your stuff here and there. This video is hands down one of the best videos I've watched of yours.
    I had to take multiple tables, pivot them, and label them with the table name and this video 100% helped me get there. I had run into my own set of issues, but not far removed from your sections of mistakes, so thank you for not letting those hit the cutting room floor.
    Anyway, keep up the great work and thanks so much!

  • @francescab1413
    @francescab1413 1 year ago +14

    I'm so glad you make mistakes and show us where to check if something goes wrong! It's my main problem when I have to work on my own after a tutorial, I mess up and don't ever know where to start to clean up my mess.

  • @markrarey3834
    @markrarey3834 7 days ago

    Alex, for those folks that are running this example currently, it appears that they removed the first table so the index has moved from [1] to [0]. (@ 8:42) Great job on this class. Love it!!!

  • @saudtechtips8674
    @saudtechtips8674 9 months ago +6

    My mind is blown after watching the whole video. I didn't imagine this could be done with Python. I have to watch it again! What a person you are, Alex!

  • @tailinghwang5480
    @tailinghwang5480 3 months ago +2

    I had struggled with learning web scraping for a long time and had nearly given up, but your video made all the difference. Thanks to your clear and effective guidance, I finally succeeded. I truly appreciate it!

  • @prasad_create2687
    @prasad_create2687 1 year ago +5

    Thank you, I learnt the basics of Python yesterday (had learnt C+ 8 yrs back so it was easy to relate) and I am a mechanical engineer but want to get into Product. This video was useful to learn from, and I will hopefully modify it for other websites. Thanks again!

  • @alex_t_jones
    @alex_t_jones 3 months ago +4

    For anyone else who may have run into the same issue: in the inspect view for the website it counted that top citations section as a table, but when I extracted that into the Jupyter notebook it didn't count this as a table, so instead I had to use index 0 to get the correct table.

  • @SupCortez
    @SupCortez 1 year ago +4

    Just finished the Google data analyst certification; you're about to help me make my portfolio look phat by scraping my own data before I do my whole hypothesis and data vis

  • @Nomuz32
    @Nomuz32 1 year ago +19

    Hi Alex, thank you a lot for all the videos. I'm currently doing a change of career to data analyst, and you are giving me more than just a little help with all your courses. Thanks for all

  • @mohammed-hananothman5558
    @mohammed-hananothman5558 2 months ago

    I was following the tutorial and decided to do something 'crazy'.
    I appended all the 'individual_row_data' to a new list and used
    pd.DataFrame(data=full_data, columns=table_headers)
    Thank you for the tutorial Alex :)
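The list-then-DataFrame approach this comment describes can be sketched end to end like this (names such as full_data and table_headers follow the comment; the sample rows are made up):

```python
import pandas as pd

# Collect every row's cell text into a list of lists first, then build the
# DataFrame in a single call -- the approach described in the comment above.
table_headers = ["Rank", "Name", "Revenue"]
scraped_rows = [["1", "Walmart", "611,289"], ["2", "Amazon", "513,983"]]

full_data = []
for individual_row_data in scraped_rows:
    full_data.append(individual_row_data)

df = pd.DataFrame(data=full_data, columns=table_headers)
print(df.shape)  # (2, 3)
```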

  • @OlgaW_Lavender
    @OlgaW_Lavender 4 months ago

    Alex, please accept my deepest gratitude for the time and effort you have put into this entire series. Your method is clear and easy to follow in real time, and your unique feature of keeping moments of uncovering errors and looking for solutions is invaluable. I may speak for many of your viewers in sharing that it carries a strong message that errors happen and they can be fixed. You teach us to think through the code, not apply it mechanically.

  • @sj1795
    @sj1795 11 months ago +16

    This was one of my FAVORITE projects in your series so far! It was SUPER interesting and HELPFUL/USEFUL. I can see using this info for many future projects.
    P.S. I LOVE that you included the "rookie mistake" because that is definitely something I would do and then NOT be able to figure out for an hour. These included "mistakes" are such valuable lessons for people in your audience like me. :) P.P.S. I really appreciate how you summarize what we do in each video/project at the end. It's these extra details that make your instruction = A+, not just an A. Also, thank you for including the index = False. As always, THANK YOU ALEX!! You ROCK!

  • @Kicsa
    @Kicsa 1 year ago +5

    I saw all the videos for this playlist and I am getting to this last one, I haven't felt so happy to learn in a while, thank you for your work and help!

  • @noob4head
    @noob4head 1 year ago +4

    Thank you for this video with an extremely clear explanation. I always wonder why my college professors can't explain something as clearly as some people on YouTube can.

  • @alijatoi2671
    @alijatoi2671 2 months ago

    Your way of teaching is the best, honestly. There are lots of YouTube channels with lots of courses, but I like the way you teach ❤

  • @mikeg4691
    @mikeg4691 1 year ago +7

    I found out why the class names were different. It seems to be a common issue. Someone explained it on Stack Overflow,
    "The table class wikitable sortable jquery-tablesorter does not appear when navigating the website until the column is sorted. I was able to grab exactly one table by using the table class wikitable sortable."
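That Stack Overflow explanation can be demonstrated offline: 'jquery-tablesorter' is added at runtime by JavaScript in the browser, so the raw server HTML only carries the classes written in the markup. A toy illustration:

```python
from bs4 import BeautifulSoup

# The class the browser's inspector shows ("wikitable sortable
# jquery-tablesorter") is partly added at runtime by JavaScript; the
# server's response only has "wikitable sortable", so the longer class
# string never matches what requests/BeautifulSoup see.
server_html = '<table class="wikitable sortable"><tr><th>Rank</th></tr></table>'
soup = BeautifulSoup(server_html, "html.parser")

browser_class = soup.find("table", class_="wikitable sortable jquery-tablesorter")
server_class = soup.find("table", class_="wikitable sortable")
print(browser_class, server_class is not None)  # None True
```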

  • @sojourner5294
      @sojourner5294 7 months ago

    Completely quick, efficient and clear. Really appreciate your effort and content, Alex! Thank you!

  • @traetrae11
    @traetrae11 1 year ago +2

    Thank you for doing this Alex. I learned a lot and followed along while watching this series so that I could learn how to do this as well. Now all I need to do is practice, practice, practice.

  • @mrunalketankumargandhi2673
    @mrunalketankumargandhi2673 2 months ago

    At 7:33, I think the reason for getting a NoneType object as output is that we're using find() with index [1]. find_all() would probably make more sense if we're sticking with the index, since find() is supposed to return only the first instance, i.e. index [0]. Nevertheless, excellent content as always! Truly appreciate the efforts!
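The find() vs find_all() distinction this comment points at, in a self-contained toy example: find() returns a single element (or None), while find_all() returns a list that can be indexed.

```python
from bs4 import BeautifulSoup

# find() -> the first matching element only; find_all() -> a list of all
# matches, which is what an index like [1] needs.
html = "<table id='a'></table><table id='b'></table>"
soup = BeautifulSoup(html, "html.parser")

first = soup.find("table")           # the table with id 'a'
all_tables = soup.find_all("table")  # both tables, indexable

print(first["id"], all_tables[1]["id"])  # a b
```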

  • @dhruvpatel5783
    @dhruvpatel5783 14 days ago +1

    Honestly, I loved creating the project and the learnings, thank u so much

  • @whitey9933
    @whitey9933 9 months ago

    Thanks for the tutorial.
    I was always told not to add to a DataFrame row by row (probably slower for much larger data),
    so I appended to a list and created a DataFrame off that - pd.DataFrame(company_list, columns=world_table_titles).set_index(['Rank'])
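A runnable sketch of this tip, with made-up sample data standing in for the scraped rows:

```python
import pandas as pd

# Build the whole DataFrame from an already-collected list, then promote
# 'Rank' to the index -- avoiding slow row-by-row appends to the DataFrame.
world_table_titles = ["Rank", "Name"]
company_list = [["1", "Walmart"], ["2", "Amazon"]]

df = pd.DataFrame(company_list, columns=world_table_titles).set_index(["Rank"])
print(df.index.tolist())  # ['1', '2']
```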

  • @yunusaprianus736
    @yunusaprianus736 1 year ago +1

    I finished the tutorial today and ended with awesome success. I faced some trouble since I used a different site, but yeah, my scraping went well!
    Thank you so much!

  • @eatersdaily
    @eatersdaily 9 months ago

    Dude, it's awesome! Just keep teaching. Short, free of long stories, useful and up-to-date data! That's all I always want.

  • @sgntsids
    @sgntsids 5 months ago

    Going through this series for a personal project, such wonderful content! For the class tags, it seems like when there's a space, bs4 ignores the 2nd "part". For instance, in my project I'm seeing the element and I just need to ignore the "list-unstyled" part for the soup.find to work.
    Didn't read through all the comments here so you might have already figured that out and shared, but wanted to comment anyway. Cheers!

  • @paullemaron5258
    @paullemaron5258 9 months ago +1

    Hey Alex, I am so proud of the amazing job you are doing, thank you for the amazing project, I am studying for a job interview tomorrow and I know I will ace it coz Alex is my teacher.

    • @markchinwike6528
      @markchinwike6528 9 months ago

      Hello. How did it go with the interview? Just to help us transition into the industry.

    • @paullemaron5258
      @paullemaron5258 9 months ago

      @@markchinwike6528 Hello sir, I had the interview and it was a success. It mainly focused on SQL, and the skills here are more than enough. I have the second interview two weeks from now.

  • @gravenoxnero
    @gravenoxnero 1 year ago +6

    Thanks, Alex!
    This was a really helpful lesson and project. It helped me get a better understanding of web scraping and restructuring the data. Now, I feel confident in applying this to a project I've been working on.

  • @Web.Scraping
    @Web.Scraping 2 months ago

    Nicely explained and very simple 👍, but for someone who has little understanding of programming, it can be a problem. For example, I collect this data in a few clicks 😉

  • @papadenj8262
    @papadenj8262 20 days ago

    This got very enjoyable at the end when I exported it as CSV 😁 Thanks for this, man

  • @alex_t_jones
    @alex_t_jones 3 months ago

    For anyone else who may have run into the same issue of the table find/find_all not looking the same, here's what happened: in the inspect view for the website it counted that top citations section as a table, but when I extracted that into the Jupyter notebook it didn't count this as a table. So instead, I had to use index 0 to get the correct table. Hope this helps!

  • @camomile4085
    @camomile4085 4 days ago

    Thank you so much for sharing your valuable lesson free. Wishing you continued success and growth in your career!

  • @izzyvickers6258
    @izzyvickers6258 1 year ago +2

    You made this wayyyy easier than I thought it would be! Worth a sub from me sir!

  • @jeet611_
    @jeet611_ 1 year ago +2

    Thanks a lot Alex, it helped me a lot to explore web scraping, and thanks for making this interesting and on point

  • @louisamkeyakala9420
    @louisamkeyakala9420 1 year ago +4

    The way I was waiting for this video 😂... thank you Alex

  • @ZeuSonRed
    @ZeuSonRed 1 year ago +1

    This was one of the greatest videos I have ever seen. Thank you very much! 🙃🙃🙃🙃🙃🙃😊

  • @chuene2666
    @chuene2666 4 months ago

    This man is a life saver😭😭... Thank you sir❤️❤️

  • @dynamickaushik2568
    @dynamickaushik2568 27 days ago

    Hello Alex Sir,
    1. First of all, your work and teaching skills are quite remarkable. You make the learning process easy and smooth, which is also helping numerous learners.
    2. I am following the whole process side by side, but at the end the number of rows and columns becomes (400, 7) when I apply df.shape. On the other hand, when I look closely, you have only (100, 7). I need some guidance on that. Please resolve my issue.
    3. Eagerly waiting for your reply.
    Thank You.

  • @Autoscraping
    @Autoscraping 10 months ago

    A fabulous video that has been of great help in orienting our new collaborators. Your generosity is highly valued!

  • @cityoflaredoopendatadivisi9197
    @cityoflaredoopendatadivisi9197 5 months ago

    Very helpful video. Love the troubleshooting as you go, and the simple explanation of how you're working through this. Thank you.

  • @gabrielledatascience
    @gabrielledatascience 6 months ago

    If anyone is having issues around 13:31 when we state the DataFrame columns, try adding
    , dtype='object'
    after world_table_titles so that the data type of the columns can be set. Mine had that issue and I thought I could share :)
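If it helps, here is what that suggestion looks like in full (world_table_titles and the sample row are stand-ins, not the video's exact data):

```python
import pandas as pd

# Creating the empty frame with dtype='object' pins every column to the
# object (string-friendly) dtype, which suits scraped text and avoids
# dtype-inference issues when rows are assigned later.
world_table_titles = ["Rank", "Name"]
df = pd.DataFrame(columns=world_table_titles, dtype="object")
df.loc[0] = ["1", "Walmart"]
print(df.dtypes.tolist())
```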

  • @Vikash-the-analyst
    @Vikash-the-analyst 6 months ago

    Honestly, very informative, and this helped me a lot in learning this topic. The explanation of every piece of code is very useful. Thanks for making this informative video.

  • @v1s1v
    @v1s1v 1 year ago +2

    Nice tutorial, but there are AI tools now like Kadoa that can do all of this for you. In the time it takes for you to watch this video, you can get an AI scraper up and running.

  • @leonardnewbill793
    @leonardnewbill793 1 year ago +1

    Super excited to finish the lesson! Thank you sir. I appreciate it!

  • @artemboichenko743
    @artemboichenko743 1 year ago +16

    Hi Alex! Super helpful video, thank you! One detail though: the growth index is not always positive. We can see in the wiki table that both negative and positive values are present in that column. Instead of using '-' for a negative value, that table uses small triangles. Could you show us how to manage that - to convert those triangles into positive or negative values accordingly?

    • @ridanaeem1012
      @ridanaeem1012 1 year ago

      hey, any workaround for this?

    • @pawledz
      @pawledz 10 months ago

      I am sure that there is a better way to handle this, but this will work:
      df = pd.DataFrame(columns=world_table_titles)
      column_data = table.find_all('tr')
      for row in column_data[1:]:
          row_data = row.find_all('td')
          row_table_data = [data.text.strip() for data in row_data]
          if row.find_all('span')[1]['title'] == 'Decrease':
              row_table_data[4] = "-" + row_table_data[4]
          length = len(df)
          df.loc[length] = row_table_data
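For reference, here is a self-contained version of the same idea, run against simplified inline markup instead of the live page (the span-with-title pattern is how the Wikipedia table marks the triangles; column positions are simplified):

```python
import pandas as pd
from bs4 import BeautifulSoup

# The growth direction is encoded as a <span title="Increase"/"Decrease">
# icon rather than a minus sign, so read that title attribute and prepend
# '-' when it says Decrease.
html = """
<table><tr><th>Name</th><th>Growth</th></tr>
<tr><td><span title="Increase"></span>Walmart</td><td><span title="Decrease"></span>6.7</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")
rows = []
for row in soup.find_all("tr")[1:]:
    cells = [td.text.strip() for td in row.find_all("td")]
    spans = row.find_all("span")
    if spans[1]["title"] == "Decrease":
        cells[1] = "-" + cells[1]
    rows.append(cells)

df = pd.DataFrame(rows, columns=["Name", "Growth"])
print(df.loc[0, "Growth"])  # -6.7
```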

  • @ayushsinghrawat1409
    @ayushsinghrawat1409 6 months ago

    My first hands-on scraping experience was with you, sir

  • @raphael.dev13
    @raphael.dev13 1 year ago +10

    Hey Alex!
    Thanks for the great video as always!
    Could you do a video on the repercussions and impact on the Data Analyst career now that OpenAI released their GPT Code interpreter?

  • @MarciaRibeiro-gd1wx
    @MarciaRibeiro-gd1wx 4 months ago +2

    You are perfect Alex. I loved this video! Thanks a lot.

  • @ibrahimmohamoudbile3424
    @ibrahimmohamoudbile3424 1 year ago

    You’re a ‘God sent’ my g

  • @boeingpete
    @boeingpete 9 months ago

    Excellent. Great video. Everything explained clearly and in a way I could follow. Thanks so much.

  • @vamshikrishnareddyLingam
    @vamshikrishnareddyLingam 6 months ago

    One word: beautiful. The video actually helped me get the client

  • @margotonik
    @margotonik 9 months ago

    I loved this!!! Very good practice. I enjoyed working on this project, including the mistakes. It's always good to know that making errors doesn't make me an idiot and is part of the process. Thank you so much for everything Alex, I am sure we all love you as well!!

  • @abraham_o
    @abraham_o 3 months ago

    Your teaching method is great I do not deny that, but this is exhausting to watch.

  • @iSky950
    @iSky950 10 months ago

    Very nice video Alex thanks for sharing! (I love that it's "live" and you make mistakes too, it's more human this way!)

  • @olajideayeola9534
    @olajideayeola9534 1 month ago

    Thank you Alex!! The playlist was very helpful.

  • @dhanienugroho4323
    @dhanienugroho4323 1 year ago +1

    Thanks for the tutorial! I just found the channel and I like the way you explain it!

  • @cbacca2999
    @cbacca2999 6 months ago

    Hi Alex. In the Wikipedia revenue table there is a minus sign in some of the revenue rows. This is actually an en dash or em dash rather than a plain ASCII hyphen, so it will appear as a different character. Look for a funky character in those rows in the output. I work in the print industry and this is an inappropriate use of the en or em dash for us.

  • @Photoshop729
    @Photoshop729 1 year ago +2

    So far on my web scraping journey, I don't know if web scraping is any faster than just manual copy-paste, unless you have repeated scrape requests against the same site or structure

  • @ajibadeabdulateef2818
    @ajibadeabdulateef2818 1 year ago +1

    Let me start by thanking you for all the tutorials in this playlist; they are totally worth my time. Thank you. What would be the reason why I have double the data? On my end I am getting 200 rows instead of 100.

    • @kayleighmacdonald
      @kayleighmacdonald 1 year ago

      I had this error too - every time you run the 'for' loop, it adds all the rows to the dataframe again. Be sure that the dataframe is empty, and only run the for loop once before importing to CSV.
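The fix described in this reply, in miniature (scraped_rows stands in for the parsed table rows): keep the DataFrame creation in the same cell as the loop, so re-running cannot double the rows.

```python
import pandas as pd

# Re-create the empty DataFrame every time the loop runs; re-running the
# cell then starts from zero rows instead of appending a second copy.
world_table_titles = ["Rank", "Name"]
scraped_rows = [["1", "Walmart"], ["2", "Amazon"]]

df = pd.DataFrame(columns=world_table_titles)  # reset on every run
for row_data in scraped_rows:
    df.loc[len(df)] = row_data

print(len(df))  # 2, no matter how many times this block is re-run
```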

  • @yoshitamanavi530
    @yoshitamanavi530 1 year ago

    I just have one comment, You are the best Alex 🤩

  • @ebamybass19
    @ebamybass19 1 year ago +2

    Thank you Alex Frebeg ❤❤

  • @jalalkiswani
    @jalalkiswani 3 months ago

    Excellent video, thanks.
    Note: jQuery classes are added at runtime and executed by the browser, so they won't appear in the direct response coming from the server.

  • @kuiwang3614
    @kuiwang3614 7 months ago +1

    fantastic lesson, very clear

  • @anuradhamondal1601
    @anuradhamondal1601 1 year ago

    02:26 lol.. as a beginner to this, already overwhelmed with all the information I recently learned, that is exactly what I would have thought!

  • @YourYTHUB
    @YourYTHUB 1 year ago +2

    Hey Alex, thank you so much for your effort, it's a really super helpful series 🙏

  • @facuceaglio1351
    @facuceaglio1351 26 days ago

    Hey Alex, I had a problem at the very end. Idk why, but Excel read the numbers as decimals, so instead of 161000 it showed 161. The only solution I found was to add a cell above and write this:
    df["Employees"] = df["Employees"].astype(str).str.replace(",", ".")
    df["Revenue (USD millions)"] = df["Revenue (USD millions)"].astype(str).str.replace(",", ".")
    I hope it helps someone. Thank you very much for this bootcamp. I send you a hug from Argentina
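A gentler alternative, under the assumption that the real goal is for Excel to read "161,000" as the number 161000: strip the thousands separators and convert to numeric before exporting, so locale settings can't misread the value (column name borrowed from the comment; sample data made up).

```python
import pandas as pd

# Remove the thousands-separator commas and convert to real integers,
# so the CSV contains plain numbers that Excel reads the same way in any
# locale.
df = pd.DataFrame({"Employees": ["2,100,000", "161,000"]})
df["Employees"] = df["Employees"].str.replace(",", "", regex=False).astype(int)
print(df["Employees"].tolist())  # [2100000, 161000]
```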

  • @gabinkundwa7215
    @gabinkundwa7215 1 year ago +1

    Thank you Alex, I am new to web scraping and this video was helpful to me! Keep up the good work!

    • @gameaddict3068
      @gameaddict3068 11 months ago

      Check out my channel for nice web scraping tools

  • @YouTubeVenJiX-zl4bj
    @YouTubeVenJiX-zl4bj 1 year ago +7

    Sir you are a real hero 🤗

  • @JananiTeklurSrinivasa
    @JananiTeklurSrinivasa 1 year ago +1

    Thank You so so much for this video, Alex! It was super useful and easy to follow!

  • @proud_indian0161
    @proud_indian0161 6 months ago

    Great tutorial, got what I was looking for, thanks

  • @ibikunleadekiitan9882
    @ibikunleadekiitan9882 1 year ago

    Thanks Alex for making me a great value to the world

  • @ezhankhan1035
    @ezhankhan1035 9 months ago

    Really helpful, thanks! You explain this muuuuch better than in the IBM Python Course haha.

    • @matrixnepal4282
      @matrixnepal4282 9 months ago

      Brother, did 'th' work in your case? While I was doing it, it showed all the numbering in th too. I would really appreciate your help if you reply

    • @ezhankhan1035
      @ezhankhan1035 9 months ago

      ​@@matrixnepal4282Did you do table.find_all('th')? I think Alex also made a similar mistake initially by doing soup.find_all('th'). Should be ON the 'table'
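The difference that reply points at, shown on toy markup: soup.find_all('th') searches the whole page, while table.find_all('th') stays inside the one table you selected.

```python
from bs4 import BeautifulSoup

# Two tables on the page: searching from `soup` collects header cells from
# both, while searching from the selected `table` stays inside it.
html = """
<table class="wikitable sortable"><tr><th>Rank</th><th>Name</th></tr></table>
<table><tr><th>Year</th></tr></table>
"""
soup = BeautifulSoup(html, "html.parser")
table = soup.find("table", class_="wikitable sortable")

print(len(soup.find_all("th")), len(table.find_all("th")))  # 3 2
```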

  • @ashutoshranjan4644
    @ashutoshranjan4644 6 months ago

    I like your way of teaching. Looking forward to learning from you.
    Thanks for making such content

    • @japhethmutuku8508
      @japhethmutuku8508 4 months ago

      Hello! I can see you are interested in learning how to scrape websites. I can help you get better at it. Let me know if you’d like more details or if you have any questions!

  • @MudassarAli-bx2pf
    @MudassarAli-bx2pf 1 year ago

    Excellent Work Sir!!! I really Appreciated your work believe me You are a great mentor!

  • @n0rmaLman
    @n0rmaLman 3 months ago

    Hi! I know it's not a pandas tutorial, but anyway, pandas can parse html by itself. Just pass your table to pandas.read_html() function.
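A minimal sketch of that tip (wrapping the HTML in StringIO, as newer pandas versions expect; read_html also needs an HTML parser such as lxml installed):

```python
import io
import pandas as pd

# pandas can parse HTML tables directly: read_html returns a list of
# DataFrames, one per <table> element found in the markup.
html = "<table><tr><th>Rank</th><th>Name</th></tr><tr><td>1</td><td>Walmart</td></tr></table>"
dfs = pd.read_html(io.StringIO(html))
print(dfs[0].shape)  # (1, 2)
```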

  • @pritamlaskar7265
    @pritamlaskar7265 1 year ago +1

    Thank you so much! Very clear and well explained!

  • @ZeeshanAli-ds1tm
    @ZeeshanAli-ds1tm 8 months ago

    A question: how can we scrape 'td' and 'th' at the same time within the same tbody <tr> tags?
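One possible answer to this question, sketched on toy markup: find_all() accepts a list of tag names, so ['th', 'td'] pulls both cell types in document order within each row.

```python
from bs4 import BeautifulSoup

# A row where the rank is a <th> and the rest are <td>: passing a list of
# tag names to find_all collects both kinds of cell, in order.
html = "<table><tr><th>1</th><td>Walmart</td><td>611,289</td></tr></table>"
soup = BeautifulSoup(html, "html.parser")
for row in soup.find_all("tr"):
    cells = [c.text for c in row.find_all(["th", "td"])]

print(cells)  # ['1', 'Walmart', '611,289']
```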

  • @tsubame1412
    @tsubame1412 7 months ago

    Thanks, this video is really helpful for me at this moment !

  • @AtharvChaulkar
    @AtharvChaulkar 11 months ago +1

    Perfect 🫶❤

  • @blackwidow2899
    @blackwidow2899 1 year ago

    Wow, Alex I totally enjoyed this. You make it so easy to understand. Now I need to go through your pandas tutorial and learn data manipulation. Thanks for being there!

  • @jmc1849
    @jmc1849 8 months ago

    Hi Alex (as if!)
    Thanks for all the content

  • @clovisstanford6515
    @clovisstanford6515 9 months ago

    17:51 I thought you were like every other guy, but you are special, Alex

  • @Larocaxx
    @Larocaxx 1 year ago

    We love you too Alex ♥ thank you for such great videos

  • @Zenitsu-mq7fq
    @Zenitsu-mq7fq 8 months ago

    56/74! I'm almost there Alex) Ty for your hard work. It is a really helpful bootcamp. But I have one question for you: why are you still a Data Analyst and not planning to become a Data Scientist or Data Engineer?

  • @ghimirepujya
    @ghimirepujya 5 months ago

    I really salute your work. Thank you.

  • @data_surgeon
    @data_surgeon 4 months ago +1

    Always been helpful. Bless you❤

    • @japhethmutuku8508
      @japhethmutuku8508 4 months ago

      Hello! I can see you are interested in learning how to scrape websites. I can help you get better at it. Let me know if you’d like more details or if you have any questions!

  • @sameenkunwar2231
    @sameenkunwar2231 3 months ago

    Thank you sir for making it easier

  • @MuhriddinIbragimov-i3k
    @MuhriddinIbragimov-i3k 2 months ago

    Thank you bro, it is so understandable

  • @Nalla-perumal
    @Nalla-perumal 10 months ago

    Simply wow!!! Hats off!

  • @Mvjesty23
    @Mvjesty23 1 year ago +1

    I’m going to do this today! Thank you Alex 😄

  • @EuricoAbel
    @EuricoAbel 8 months ago

    Zeus Proxy facilitates seamless SEO monitoring and data scraping, enabling users to gather valuable insights.

  • @sumanhachappa2822
    @sumanhachappa2822 1 year ago

    fantastic way of explaining things

  • @nguyenhuyhoangk18hcm37
    @nguyenhuyhoangk18hcm37 10 months ago

    I really like your project! I appreciate you

  • @moviesprobe6220
    @moviesprobe6220 1 year ago +1

    Much needed video ❤

  • @martinbolio257
    @martinbolio257 6 months ago

    Very very useful! Great video.

  • @anthonygordon5052
    @anthonygordon5052 1 year ago

    Thanks for the videos as usual Alex !

  • @adiyansfuntime
    @adiyansfuntime 7 months ago

    This is a fun project. Thanks for this.

  • @olumidekolawole707
    @olumidekolawole707 3 months ago

    Thank you, Alex.

  • @ghimirepujya
    @ghimirepujya 4 months ago

    Alex, you are great.