Scraping Data from a Real Website | Web Scraping in Python

  • Published: 27 Dec 2024

Comments • 386

  • @jorge.roques5533
    @jorge.roques5533 8 months ago +178

    Honestly, I love that you include your missteps in your tutorials, for several reasons. It makes coding seem more human, and it shows us that even content creators and great programmers can have missteps they need to go back and fix, which is usually edited out of other tutorial videos. Not to mention there might be people having the same issues without understanding why, and you explain it, so it's almost a mini tutorial on debugging and your programmer thought process. Overall it was an easy 25 minutes to spend watching this. Thank you.

    • @nocturnalcb
      @nocturnalcb 8 months ago +1

      Exactly😁

    • @NaderTheExpert
      @NaderTheExpert 4 months ago +4

      Yes, I agree 100%. After following the video from beginning to end, I finally figured out how to get the same results. What made it more challenging for me was that 1. the website HTML has changed and 2. the content of the table we are scraping was updated. So, to find the table, I changed the index from 1 to 2, still didn't get the right table, and then changed it from 2 to 0. After learning the thought process and getting the right table, I spoiled myself and asked ChatGPT. ChatGPT's code was much better for scraping, but as you said, it is not as human with mistakes, and we learn from the mistakes.

    • @Web.Scraping
      @Web.Scraping 3 months ago

      👏👍

    • @johnhudson9558
      @johnhudson9558 1 month ago

      Hello how are you doing

    • @MiriJane214
      @MiriJane214 1 month ago

      It's really good to see the different ways you go about solving problems, and how you stay so calm about it, Alex! The webpage has changed now, so soup.find('table') now finds the correct table. But knowing how to find it when we might need to use an index in future is really helpful.

  • @Charlay_Charlay
    @Charlay_Charlay 11 months ago +58

    12:21 I literally stopped when I couldn't figure out why I was getting extra titles when I pulled the titles. I'm so glad that you showed your rookie mistake. Everyone, please watch Alex's videos in full before stopping the video. Thank you for showing your mistakes.

    • @chrille91
      @chrille91 9 months ago +6

      In fact, YOUR approach is the correct way of solving such issues!
      Trying to figure out the error on your own is the ACTUAL learning taking place!
      Always try for yourself first, before you have a look at the solution. Otherwise you might fall victim to the fake-learning trap.

  • @aaronklingensmith159
    @aaronklingensmith159 1 year ago +73

    Alex: when I needed to learn SQL for my first analyst job as a career changer, you were there with videos to help me do so. Now I'm in a role that is using more python and once again, you're there! Really appreciate all the work you are putting into creating content to help people!

  • @EKTurduckin
    @EKTurduckin 1 year ago +30

    Last year I got a job as a BI Analyst and I've been watching your stuff here and there. This video is hands down one of the best videos I've watched of yours.
    I had to take multiple tables, pivot them, and label them with the table name and this video 100% helped me get there. I had run into my own set of issues, but not far removed from your sections of mistakes, so thank you for not letting those hit the cutting room floor.
    Anyway, keep up the great work and thanks so much!

  • @markrarey3834
    @markrarey3834 1 month ago +1

    Alex, for those folks that are running this example currently, it appears that they removed the first table so the index has moved from [1] to [0]. (@ 8:42) Great job on this class. Love it!!!

  • @francescab1413
    @francescab1413 1 year ago +15

    I'm so glad you make mistakes and show us where to check if something goes wrong! It's my main problem when I have to work on my own after a tutorial, I mess up and don't ever know where to start to clean up my mess.

  • @saudtechtips8674
    @saudtechtips8674 10 months ago +8

    My mind is blown after watching the whole video. I didn't imagine this could be done with Python. I have to watch it again! What a person you are, Alex!

  • @taramallynn1766
    @taramallynn1766 4 days ago

    No need to apologize for running into issues. That is what helps newbies learn. Thank you for not editing that out.

  • @sj1795
    @sj1795 1 year ago +17

    This was one of my FAVORITE projects in your series so far! It was SUPER interesting and HELPFUL/USEFUL. I can see using this info for many future projects.
    P.S. I LOVE that you included the "rookie mistake" because that is definitely something I would do and then NOT be able to figure out for an hour. These included "mistakes" are such valuable lessons for people in your audience like me. :) P.P.S. I really appreciate how you summarize what we do in each video/project at the end. It's these extra details that make your instruction = A+, not just an A. Also, thank you for including the index = False. As always, THANK YOU ALEX!! You ROCK!

  • @tailinghwang5480
    @tailinghwang5480 5 months ago +2

    I had struggled with learning web scraping for a long time and had nearly given up, but your video made all the difference. Thanks to your clear and effective guidance, I finally succeeded. I truly appreciate it!

  • @Nomuz32
    @Nomuz32 1 year ago +19

    Hi Alex, thank you a lot for all the videos. I'm currently making a career change to data analyst, and you are giving me more than just a little help with all your courses. Thanks for everything.

  • @prasad_create2687
    @prasad_create2687 1 year ago +5

    Thank you! I learnt the basics of Python yesterday (I had learnt C+ 8 yrs back, so it was easy to relate). I am a mechanical engineer but want to get into Product. This video was useful to learn from, and I will hopefully modify it for other websites. Thanks again!

  • @Kicsa
    @Kicsa 1 year ago +5

    I saw all the videos in this playlist and am getting to this last one. I haven't felt so happy to learn in a while. Thank you for your work and help!

  • @SupCortez
    @SupCortez 1 year ago +4

    Just finished google data analyst certification, you about to help me make my portfolio look phat with scraping my own data before I do my whole hypothesis and data vis

  • @OlgaW_Lavender
    @OlgaW_Lavender 5 months ago

    Alex, please accept my deepest gratitude for the time and effort you have put into this entire series. Your method is clear and easy to follow in real time, and your unique feature of keeping moments of uncovering errors and looking for solutions is invaluable. I may speak for many of your viewers in sharing that it carries a strong message that errors happen and they can be fixed. You teach us to think through the code, not apply it mechanically.

  • @舜怡蔡
    @舜怡蔡 1 day ago

    Very useful, especially the summary at the last part of the video. Great job and thanks a lot!

  • @manikantaperumalla2197
    @manikantaperumalla2197 24 days ago +1

    Hey, it's very intuitive, Alex. Your way of explaining makes me more interested in this topic.

  • @eatersdaily
    @eatersdaily 10 months ago

    Dude, it's awesome! Just keep teaching. Short, free of long stories, useful and up-to-date data! That's all I ever want.

  • @louisamkeyakala9420
    @louisamkeyakala9420 1 year ago +4

    The way I was waiting for this video 😂... thank you Alex

  • @mohammed-hananothman5558
    @mohammed-hananothman5558 3 months ago

    I was following the tutorial and decided to do something 'crazy':
    I appended all the 'individual_row_data' to a new list and used
    pd.DataFrame(data=full_data, columns=table_headers)
    Thank you for the tutorial, Alex :)

  • @mrunalketankumargandhi2673
    @mrunalketankumargandhi2673 3 months ago

    At 7:33, I think the reason for getting a NoneType object as output is that we're using find() with index [1]. find_all() would probably make more sense if we're sticking with the index, since find() is supposed to return only the first instance, i.e. index [0]. Nevertheless, excellent content as always! Truly appreciate the effort!
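
A toy snippet (not the actual Wikipedia page) illustrating the difference described above: find() takes no index and returns only the first match, while find_all() returns a list that can be indexed:

```python
from bs4 import BeautifulSoup

html = """
<table id="a"><tr><th>First</th></tr></table>
<table id="b"><tr><th>Second</th></tr></table>
"""
soup = BeautifulSoup(html, "html.parser")

# find() always returns the first match (or None); it cannot be indexed
first = soup.find("table")

# find_all() returns a list, so [1] picks the second table
second = soup.find_all("table")[1]

print(first["id"], second["id"])  # a b
```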

  • @noob4head
    @noob4head 1 year ago +4

    Thank you for this video with an extremely clear explanation. I always wonder why my college professors can't explain something as clearly as some people on YouTube can.

  • @alex_t_jones
    @alex_t_jones 4 months ago +5

    For anyone else who may have run into the same issue: in the inspect view, the website counted that top citations section as a table, but when I extracted the page into the Jupyter notebook it didn't count as a table, so instead I had to use index 0 to get the correct table.

  • @sojourner5294
    @sojourner5294 8 months ago

    Completely quick, efficient and clear, really appreciate your effort and content Alex ! Thank You !

  • @traetrae11
    @traetrae11 1 year ago +2

    Thank you for doing this Alex. I learned a lot and followed along while watching this series so that I could learn how to do this as well. Now all I need to do is practice, practice, practice.

  • @dhruvpatel5783
    @dhruvpatel5783 1 month ago +1

    Honestly, I love creating the project and the learnings. Thank you so much!

  • @izzyvickers6258
    @izzyvickers6258 1 year ago +2

    You made this wayyyy easier than I thought it would be! Worth a sub from me sir!

  • @gravenoxnero
    @gravenoxnero 1 year ago +6

    Thanks, Alex!
    This was a really helpful lesson and project. It helped me get a better understanding of web scraping and restructuring the data. Now I feel confident in applying this to a project I've been working on.

  • @alijatoi2671
    @alijatoi2671 3 months ago

    Your way of teaching is the best, honestly. There are lots of YouTube channels with lots of courses, but I like the way you teach ❤

  • @yunusaprianus736
    @yunusaprianus736 1 year ago +1

    I finished the tutorial today and ended with awesome success. I faced some trouble since I used a different site, but yeah, my scraping went well!
    Thank you so much!

  • @mikeg4691
    @mikeg4691 1 year ago +7

    I found out why the class names were different. It seems to be a common issue. Someone explained it on Stack Overflow,
    "The table class wikitable sortable jquery-tablesorter does not appear when navigating the website until the column is sorted. I was able to grab exactly one table by using the table class wikitable sortable."
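A minimal illustration of that Stack Overflow point, using an inline HTML string rather than the live page: the HTML that requests receives only carries the classes present before any JavaScript runs:

```python
from bs4 import BeautifulSoup

# What requests receives: no "jquery-tablesorter", because that class is
# added later by the page's JavaScript when a column gets sorted.
html = '<table class="wikitable sortable"><tr><th>Rank</th></tr></table>'
soup = BeautifulSoup(html, "html.parser")

table = soup.find("table", class_="wikitable sortable")
print(table.th.text)  # Rank
```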

  • @ZeuSonRed
    @ZeuSonRed 1 year ago +1

    This was one of the greatest videos I have ever seen. Thank you very much! 🙃🙃🙃🙃🙃🙃😊

  • @camomile4085
    @camomile4085 1 month ago

    Thank you so much for sharing your valuable lesson for free. Wishing you continued success and growth in your career!

  • @papadenj8262
    @papadenj8262 1 month ago

    This got very enjoyable at the end when I exported it as CSV 😁 Thanks for this, man.

  • @paullemaron5258
    @paullemaron5258 10 months ago +1

    Hey Alex, I am so proud of the amazing job you are doing. Thank you for the amazing project. I am studying for a job interview tomorrow, and I know I will ace it because Alex is my teacher.

    • @markchinwike6528
      @markchinwike6528 10 months ago

      Hello. How did it go with the interview? Just to help us transition into the industry.

    • @paullemaron5258
      @paullemaron5258 10 months ago

      @@markchinwike6528 Hello sir, I had the interview and it was a success. It mainly focused on SQL, and the skills here are more than enough. I have the second interview two weeks from now.

  • @Autoscraping
    @Autoscraping 11 months ago

    A fabulous video that has been of great help in orienting our new collaborators. Your generosity is highly valued!

  • @MarciaRibeiro-gd1wx
    @MarciaRibeiro-gd1wx 5 months ago +2

    You are perfect Alex. I loved this video! Thanks a lot.

  • @margotonik
    @margotonik 10 months ago

    I loved this!!! Very good practice. I enjoyed working on this project, including the mistakes. It's always good to know that making errors doesn't make me an idiot and is part of the process. Thank you so much for everything, Alex. I am sure we all love you as well!!

  • @jeet611_
    @jeet611_ 1 year ago +2

    Thanks a lot, Alex. It helped me a lot to explore web scraping, and thanks for making this interesting and on point.

  • @leonardnewbill793
    @leonardnewbill793 1 year ago +1

    Super excited to finish the lesson! Thank you sir. I appreciate it!

  • @whitey9933
    @whitey9933 11 months ago

    Thanks for the tutorial!
    I was always told not to add to a dataframe row by row (probably slower for much larger data),
    so I appended to a list and created a DataFrame off that: pd.DataFrame(company_list, columns=world_table_titles).set_index(['Rank'])
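
A self-contained sketch of that approach (toy table and hypothetical column names, not the video's exact code): collect the rows into a plain Python list first, then build the DataFrame in one call:

```python
import pandas as pd
from bs4 import BeautifulSoup

html = """
<table>
  <tr><th>Name</th><th>Revenue</th></tr>
  <tr><td>Walmart</td><td>611,289</td></tr>
  <tr><td>Amazon</td><td>513,983</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")
table = soup.find("table")

headers = [th.text.strip() for th in table.find_all("th")]

# Build a plain list of rows first; appending to a DataFrame row by row
# with df.loc[len(df)] re-allocates the frame on every iteration.
rows = [[td.text.strip() for td in tr.find_all("td")]
        for tr in table.find_all("tr")[1:]]

df = pd.DataFrame(rows, columns=headers)
print(df.shape)  # (2, 2)
```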

  • @naimmomin5811
    @naimmomin5811 1 year ago

    So I just had this one question, about 12:27: even if you were to switch soup.find_all('th') to table.find_all('th'), shouldn't it return the same thing as before? Since all the tables are from the same class, and they all also use <th> for the headers.

  • @sgntsids
    @sgntsids 6 months ago

    Going through this series for a personal project, such wonderful content! For the class tags, it seems like when there's a space, bs4 ignores the 2nd "part". For instance, in my project I'm seeing the element, and I just need to ignore the "list-unstyled" part for soup.find to work.
    I didn't read through all the comments here, so you might have already figured that out and shared, but I wanted to comment anyway. Cheers!

  • @boeingpete
    @boeingpete 10 months ago

    Excellent. Great video. Everything explained clearly and in a way I could follow. Thanks so much.

  • @dhanienugroho4323
    @dhanienugroho4323 1 year ago +1

    Thanks for the tutorial! I just found the channel and I like the way you explain it!

  • @gabinkundwa7215
    @gabinkundwa7215 1 year ago +1

    Thank you Alex, I am new to web scraping and this video was helpful to me! Keep up the good work!

  • @DevunuriPrabhu
    @DevunuriPrabhu 7 days ago

    Excellent video: easy to understand, with a step-by-step, clear explanation. Wow, super. If everyone explained like this, I'm sure everyone would get into coding. Please make another video on dynamic HTML.

  • @artemboichenko743
    @artemboichenko743 1 year ago +16

    Hi Alex! Super helpful video, thank you! One detail though: the growth index is not always positive. We can see in the wiki table that negative and positive values are present in that column. Instead of using '-' for negative values, that table uses small triangles. Could you show us how to manage that, i.e. convert those triangles into positive or negative values accordingly?

    • @ridanaeem1012
      @ridanaeem1012 1 year ago

      hey, any workaround for this?

    • @pawledz
      @pawledz 11 months ago

      I am sure that there is a better way to handle this, but this will work:

      df = pd.DataFrame(columns=world_table_titles)
      column_data = table.find_all('tr')
      for row in column_data[1:]:
          row_data = row.find_all('td')
          row_table_data = [data.text.strip() for data in row_data]
          if row.find_all('span')[1]['title'] == 'Decrease':
              row_table_data[4] = "-" + row_table_data[4]
          length = len(df)
          df.loc[length] = row_table_data

  • @Vikash-the-analyst
    @Vikash-the-analyst 7 months ago

    Honestly, very informative, and this helped me a lot to learn this topic. The explanation of every piece of code is very useful. Thanks for making this informative video.

  • @cityoflaredoopendatadivisi9197
    @cityoflaredoopendatadivisi9197 6 months ago

    Very helpful video. Love the troubleshooting as you go, and the simple explanation of how you're working through this. Thank you.

  • @JananiTeklurSrinivasa
    @JananiTeklurSrinivasa 1 year ago +1

    Thank You so so much for this video, Alex! It was super useful and easy to follow!

  • @raphael.dev13
    @raphael.dev13 1 year ago +10

    Hey Alex!
    Thanks for the great video as always!
    Could you do a video on the repercussions and impact on the Data Analyst career now that OpenAI released their GPT Code interpreter?

  • @iSky950
    @iSky950 11 months ago

    Very nice video Alex thanks for sharing! (I love that it's "live" and you make mistakes too, it's more human this way!)

  • @olajideayeola9534
    @olajideayeola9534 2 months ago

    Thank you Alex!! The playlist was very helpful.

  • @ZeeshanAli-ds1tm
    @ZeeshanAli-ds1tm 10 months ago

    A question: how can we scrape 'td' and 'th' at the same time within the same tbody <tr> tags?
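
One way to do this, sketched on a toy row: find_all() accepts a list of tag names, so a single call returns both 'th' and 'td' cells in document order:

```python
from bs4 import BeautifulSoup

html = "<table><tbody><tr><th>1</th><td>Walmart</td><td>Retail</td></tr></tbody></table>"
soup = BeautifulSoup(html, "html.parser")

row = soup.find("tr")
# find_all() accepts a list of tag names, so one call grabs both kinds of cell
cells = [cell.text for cell in row.find_all(["th", "td"])]
print(cells)  # ['1', 'Walmart', 'Retail']
```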

  • @chuene2666
    @chuene2666 5 months ago

    This man is a life saver😭😭... Thank you sir❤️❤️

  • @ibrahimmohamoudbile3424
    @ibrahimmohamoudbile3424 1 year ago

    You’re a ‘God sent’ my g

  • @blackwidow2899
    @blackwidow2899 1 year ago

    Wow, Alex I totally enjoyed this. You make it so easy to understand. Now I need to go through your pandas tutorial and learn data manipulation. Thanks for being there!

  • @Web.Scraping
    @Web.Scraping 3 months ago

    Nicely explained and very simple 👍, but for someone who has little understanding of programming, it can be a problem. For example, I collect this data in a few clicks 😉

  • @YourYTHUB
    @YourYTHUB 1 year ago +2

    Hey Alex, thank you so much for your effort. It's a really super helpful series 🙏

  • @vamshikrishnareddyLingam
    @vamshikrishnareddyLingam 7 months ago

    One word: beautiful. The video actually helped me get the client.

  • @yoshitamanavi530
    @yoshitamanavi530 1 year ago

    I just have one comment, You are the best Alex 🤩

  • @gabrielledatascience
    @gabrielledatascience 8 months ago

    If anyone is having issues around 13:31 when we state the dataframe columns, try adding
    , dtype='object'
    after world_table_titles so that the data type of the columns can be set. Mine had that issue, and I thought I could share :)

  • @dynamickaushik2568
    @dynamickaushik2568 1 month ago

    Hello Alex Sir,
    1. First of all, your work and teaching skills are quite remarkable. You make the learning process easy and smooth, which is also helping numerous learners.
    2. I am following the whole process side by side, but at the end the number of rows and columns becomes (400, 7) when I apply the df.shape function. On the other hand, when I look closely, you have only (100, 7). I need some guidance on that. Please resolve my issue.
    3. Eagerly waiting for your reply.
    Thank You.

  • @vishnupkumar2395
    @vishnupkumar2395 1 year ago +1

    Hi,
    One quick question: instead of all this, we could simply copy-paste the content, right?

  • @MudassarAli-bx2pf
    @MudassarAli-bx2pf 1 year ago

    Excellent work, sir!!! I really appreciate your work. Believe me, you are a great mentor!

  • @ashutoshranjan4644
    @ashutoshranjan4644 7 months ago

    I like your way of teaching. Looking forward to learning from you.
    Thanks for making such content.

    • @japhethmutuku8508
      @japhethmutuku8508 5 months ago

      Hello! I can see you are interested in learning how to scrape websites. I can help you get better at it. Let me know if you’d like more details or if you have any questions!

  • @Photoshop729
    @Photoshop729 1 year ago +2

    So far on my web scraping journey, I don't know if web scraping is any faster than just manual copy-paste, unless you have repeated scrape requests against the same site or structure.

  • @ezhankhan1035
    @ezhankhan1035 10 months ago

    Really helpful, thanks! You explain this muuuuch better than the IBM Python course does, haha.

    • @matrixnepal4282
      @matrixnepal4282 10 months ago

      Brother, did 'th' work in your case? While I was doing it, it showed all the numbering in th too. I would really appreciate your help if you reply.

    • @ezhankhan1035
      @ezhankhan1035 10 months ago

      ​@@matrixnepal4282Did you do table.find_all('th')? I think Alex also made a similar mistake initially by doing soup.find_all('th'). Should be ON the 'table'

  • @proud_indian0161
    @proud_indian0161 7 months ago

    Great tutorial. Got what I was looking for, thanks!

  • @kuiwang3614
    @kuiwang3614 8 months ago +1

    fantastic lesson, very clear

  • @pritamlaskar7265
    @pritamlaskar7265 1 year ago +1

    Thank you so much! Very clear and well explained!

  • @data_surgeon
    @data_surgeon 5 months ago +1

    Always been helpful. Bless you❤

  • @anthonygordon5052
    @anthonygordon5052 1 year ago

    Thanks for the videos as usual Alex !

  • @tsubame1412
    @tsubame1412 8 months ago

    Thanks, this video is really helpful for me at this moment !

  • @southafricangamer7174
    @southafricangamer7174 1 month ago

    Hi Alex,
    What is your approach to dealing with an Offset= parameter for web scraping? Sometimes a webpage has a div whose content spans numerous pages, so to speak.
    Thanks.

  • @ibikunleadekiitan9882
    @ibikunleadekiitan9882 1 year ago

    Thanks Alex for making me a great value to the world

  • @outhouse.wholesaler
    @outhouse.wholesaler 1 year ago +1

    What if Wikipedia had split the list over two pages, with page 1 & 2 hyperlink buttons at the bottom of the page? How do you get Python to click those links and continue scraping on page 2?
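
One common pattern for that, sketched here with an in-memory dict standing in for requests.get(url).text so it runs offline (the URLs and the "next" link class are made up): keep following the next-page link until there isn't one:

```python
from bs4 import BeautifulSoup

# Stand-in for the live site so this sketch runs offline; in practice
# fetch(url) would be requests.get(url).text. URLs/classes are hypothetical.
PAGES = {
    "/list?page=1": '<td>Walmart</td><a class="next" href="/list?page=2">2</a>',
    "/list?page=2": "<td>Amazon</td>",
}

def fetch(url):
    return PAGES[url]

rows, url = [], "/list?page=1"
while url:
    soup = BeautifulSoup(fetch(url), "html.parser")
    rows += [td.text for td in soup.find_all("td")]
    nxt = soup.find("a", class_="next")  # follow the "next" link if present
    url = nxt["href"] if nxt else None

print(rows)  # ['Walmart', 'Amazon']
```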

  • @oanhkieunguyen156
    @oanhkieunguyen156 1 year ago

    Thanks so much for this video! For the first time, I understand the principle and the way to scrape data :)

  • @anuradhamondal1601
    @anuradhamondal1601 1 year ago

    02:26 lol... as a beginner to this, already overwhelmed with all the information I recently learned, that is exactly what I would have thought!

  • @abraham_o
    @abraham_o 4 months ago

    Your teaching method is great I do not deny that, but this is exhausting to watch.

  • @YouTubeVenJiX-zl4bj
    @YouTubeVenJiX-zl4bj 1 year ago +7

    Sir you are a real hero 🤗

  • @ebamybass19
    @ebamybass19 1 year ago +2

    Thank you Alex Freberg ❤❤

  • @Mvjesty23
    @Mvjesty23 1 year ago +1

    I’m going to do this today! Thank you Alex 😄

  • @ajibadeabdulateef2818
    @ajibadeabdulateef2818 1 year ago +1

    Let me start by thanking you for all the tutorials in this playlist; they are totally worth my time. Thank you. What would be the reason I have double the data? On my end I am getting 200 rows instead of 100.

    • @kayleighmacdonald
      @kayleighmacdonald 1 year ago

      I had this error too. Every time you run the 'for' loop, it adds all the rows to the dataframe again. Be sure that the dataframe is empty, and only run the for loop once before exporting to CSV.
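
A small sketch of that advice (hypothetical titles and rows): keep the empty-DataFrame creation and the loop together, so a re-run always starts from an empty frame instead of appending a second copy:

```python
import pandas as pd

# Hypothetical stand-ins for the scraped headers and rows
titles = ["Rank", "Name"]
scraped = [["1", "Walmart"], ["2", "Amazon"]]

def build_df():
    # Creating the empty DataFrame inside the same function (or notebook cell)
    # as the loop means a re-run starts fresh instead of appending duplicates.
    df = pd.DataFrame(columns=titles)
    for row in scraped:
        df.loc[len(df)] = row
    return df

df = build_df()
df = build_df()  # running it a second time still yields 2 rows, not 4
print(len(df))   # 2
```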

  • @qlintdwayne9044
    @qlintdwayne9044 11 months ago +1

    Anyone else stuck at 21:00? No matter what index I use, it still brings up a mismatched-rows error.

    • @qlintdwayne9044
      @qlintdwayne9044 11 months ago

      I'm getting zero rows with 63 columns; can't figure out where I went wrong.

    • @agbelayiesther142
      @agbelayiesther142 4 months ago

      Me too, stuck there.

    • @8BallPower
      @8BallPower 3 months ago

      @qlintdwayne9044 also stuck here

  • @cosmicstrays6864
    @cosmicstrays6864 1 year ago +1

    Hi Alex, thanks a lot. Can you tell me how to do web scraping where the inspect option in Chrome is disabled?

  • @88oscuro
    @88oscuro 5 months ago

    I have issues scraping tables that include href text. Meaning, instead of just getting the individual word, I also scrape every word that the href contains.
    Anyone got a solution to this problem?
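
If the extra text is coming from other markup inside the same cell, one approach (shown on a toy cell, not a specific site) is to target the tag you actually want instead of taking the whole cell's text:

```python
from bs4 import BeautifulSoup

# Toy cell: the link text plus extra material inside the same <td>
html = '<td><a href="/wiki/Walmart">Walmart</a> [1]</td>'
soup = BeautifulSoup(html, "html.parser")
td = soup.find("td")

print(td.get_text())    # 'Walmart [1]' -- everything inside the cell
print(td.a.get_text())  # 'Walmart'     -- only the link's own text
```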

  • @ayushsinghrawat1409
    @ayushsinghrawat1409 7 months ago

    I had my first hands-on scraping experience with you, sir.

  • @NewLeaf88
    @NewLeaf88 1 year ago +2

    Hi Alex, could you explain how I could create a file that could loop through a given URL to extract data from multiple pages?

  • @Larocaxx
    @Larocaxx 1 year ago

    We love you too Alex ♥ thank you for such great videos

  • @facuceaglio1351
    @facuceaglio1351 1 month ago +1

    Hey Alex, I had a problem at the very end. I don't know why, but Excel read the numbers as decimals, so instead of 161000 it showed 161. The only solution I found was to add a cell above and write this:
    df["Employees"] = df["Employees"].astype(str).str.replace(",", ".")
    df["Revenue (USD millions)"] = df["Revenue (USD millions)"].astype(str).str.replace(",", ".")
    I hope it helps someone. Thank you very much for this bootcamp. I send you a hug from Argentina.
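
An alternative sketch of the same fix (hypothetical column values): strip the thousands separators and convert to real numbers before exporting, so the CSV carries numeric values that no Excel locale can misread:

```python
import pandas as pd

# Hypothetical comma-formatted values as scraped from the page
df = pd.DataFrame({"Employees": ["2,100,000", "161,000"]})

# Strip the thousands separators and store real integers, so Excel's locale
# (which may treat ',' as a decimal point) cannot misinterpret the CSV.
df["Employees"] = pd.to_numeric(df["Employees"].str.replace(",", "", regex=False))
print(df["Employees"].tolist())  # [2100000, 161000]
```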

  • @jmc1849
    @jmc1849 9 months ago

    Hi Alex (as if!)
    Thanks for all the content

  • @cbacca2999
    @cbacca2999 7 months ago

    Hi Alex. In the Wikipedia revenue table there is a minus sign in some of the revenue rows. This is actually an extended-ASCII en dash or em dash, which will appear as another character. Look for a funky character in those rows in the output. I work in the print industry, and this is an inappropriate use of the en or em dash for us.

  • @martinbolio257
    @martinbolio257 7 months ago

    Very very useful! Great video.

  • @millenia2222
    @millenia2222 4 months ago

    Very good and easy-to-follow video, recommended.

  • @thememeguy7630
    @thememeguy7630 1 year ago +2

    Hello Alex! Just finished your Data Analyst Bootcamp, will you be doing Data Science Bootcamp in the future? Thanks!

    • @dywa_varaprasad
      @dywa_varaprasad 1 year ago

      Can you please share your thoughts for newbies who are just getting ready for this?

    • @thememeguy7630
      @thememeguy7630 1 year ago

      @@dywa_varaprasad It's pretty good. You can start by viewing the Data Analyst Bootcamp Playlist in his channel.

  • @Jt277277
    @Jt277277 1 year ago +2

    Thank you Alex for such a great lecture. But I have a question: since the length of the df goes 0, 1, 2, 3... every time we loop, why can't we use df.iloc for index slicing instead of df.loc? Thank you.

    • @millenniumkitten4107
      @millenniumkitten4107 1 year ago

      I was curious about that so I tried it and got the error: IndexError("iloc cannot enlarge its target object")

  • @moviesprobe6220
    @moviesprobe6220 1 year ago +1

    Much needed video ❤