Honestly I love that you include your missteps in your tutorials for several reasons. It makes coding seem more human, it also shows us that even content creators and great programmers can have missteps that they need to go back and fix which is usually edited out of other tutorial videos. Not to mention there might be people having the same issues without understanding why and you explain it so its almost a mini tutorial on debugging and your programmer thought process. Overall it was an easy 25 minutes to spend watching this. Thank you.
Exactly😁
Yes, I agree 100%. After following the video from beginning to end, I finally figured out how to get the same results. What made it more challenging for me was that 1. the website HTML has changed and 2. the content of the table we are scraping was updated. So, to find the table, I changed the index from 1 to 2, still didn't get the right table, then changed it from 2 to 0. After learning the thought process and getting the right table, I spoiled myself and asked ChatGPT. ChatGPT's code was much better for scraping, but as you said, it is not as human, with mistakes, and we learn from the mistakes.
👏👍
It's really good to see the different ways you go about solving problems and how you stay so calm about it, Alex! The webpage has changed now, so soup.find('table') finds the correct table. But knowing how to find it with an index when we might need to in future is really helpful.
12:21 I literally stopped when I couldn't figure out why I was getting extra titles when I pulled the titles. I'm so glad that you showed your rookie mistake. Everyone, please watch Alex's videos in full before stopping the video. Thank you for showing your mistakes.
In fact, YOUR approach is the correct way of solving such issues!
Trying to figure out the error on your own is the ACTUAL learning taking place!
Always try for yourself first, before you have a look at the solution. Otherwise you might fall victim to the fake-learning trap.
Alex: when I needed to learn SQL for my first analyst job as a career changer, you were there with videos to help me do so. Now I'm in a role that is using more Python and once again, you're there! Really appreciate all the work you are putting into creating content to help people!
Can you tell me if this playlist is useful for an analyst?
Of course @@--Manoj007
Last year I got a job as a BI Analyst and I've been watching your stuff here and there. This video is hands down one of the best videos I've watched of yours.
I had to take multiple tables, pivot them, and label them with the table name and this video 100% helped me get there. I had run into my own set of issues, but not far removed from your sections of mistakes, so thank you for not letting those hit the cutting room floor.
Anyway, keep up the great work and thanks so much!
I'm so glad you make mistakes and show us where to check if something goes wrong! It's my main problem when I have to work on my own after a tutorial: I mess up and never know where to start to clean up my mess.
Alex, for those folks that are running this example currently, it appears that they removed the first table so the index has moved from [1] to [0]. (@ 8:42) Great job on this class. Love it!!!
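For anyone following along today, here is a minimal sketch of that fix, assuming the same setup as the video (the URL and parser choice are assumptions based on the tutorial):

import requests
from bs4 import BeautifulSoup

url = 'https://en.wikipedia.org/wiki/List_of_largest_companies_in_the_United_States_by_revenue'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')

# find_all returns a list, so when Wikipedia adds or removes a table,
# just adjust the index; print the first row of each candidate to check
tables = soup.find_all('table')
table = tables[0]  # was tables[1] when the video was recorded
print(table.find('tr').text)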
My mind is blown after watching the whole video. I didn't imagine this could be done with Python. I have to watch it again! What a person you are, Alex!
I had struggled with learning web scraping for a long time and had nearly given up, but your video made all the difference. Thanks to your clear and effective guidance, I finally succeeded. I truly appreciate it!
Thank you, I learnt the basics of Python yesterday (had learnt C+ 8 yrs back, so it was easy to relate). I am a mechanical engineer but want to get into Product. This video was useful to learn from, and I will hopefully modify it for other websites. Thanks again!
Just finished the Google data analyst certification; you're about to help me make my portfolio look phat with scraping my own data before I do my whole hypothesis and data vis.
Hi Alex, thank you a lot for all the videos. I'm currently making a career change to data analyst, and you are giving me more than just a little help with all your courses. Thanks for it all.
same
@@sarurajendran5762 same
Same!
I was following the tutorial and decided to do something 'crazy'.
I appended all the 'individual_row_data' to a new list and used
pd.DataFrame(data=full_data, columns=table_headers)
Thank you for the tutorial Alex :)
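A sketch of that list-based approach in full (the loop body and variable names are assumptions based on the tutorial; the set_index step comes from another comment further down):

import pandas as pd

# grab the header text once
table_headers = [th.text.strip() for th in table.find_all('th')]

# collect every row into a plain list first, then build the frame in one go
full_data = []
for row in table.find_all('tr')[1:]:  # skip the header row
    individual_row_data = [td.text.strip() for td in row.find_all('td')]
    full_data.append(individual_row_data)

df = pd.DataFrame(data=full_data, columns=table_headers)
df = df.set_index('Rank')  # optional, as another commenter did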
Alex, please accept my deepest gratitude for the time and effort you have put into this entire series. Your method is clear and easy to follow in real time, and your unique feature of keeping moments of uncovering errors and looking for solutions is invaluable. I may speak for many of your viewers in sharing that it carries a strong message that errors happen and they can be fixed. You teach us to think through the code, not apply it mechanically.
This was one of my FAVORITE projects in your series so far! It was SUPER interesting and HELPFUL/USEFUL. I can see using this info for many future projects.
P.S. I LOVE that you included the "rookie mistake" because that is definitely something I would do and then NOT be able to figure out for an hour. These included "mistakes" are such valuable lessons for people in your audience like me. :) P.P.S. I really appreciate how you summarize what we do in each video/project at the end. It's these extra details that make your instruction = A+, not just an A. Also, thank you for including the index = False. As always, THANK YOU ALEX!! You ROCK!
FACTS 100%
I watched all the videos in this playlist and am now getting to this last one. I haven't felt so happy learning in a while; thank you for your work and help!
Thank you for this video with an extremely clear explanation. I always wonder why my college professors can't explain something as clearly as some people on YouTube can.
Your way of teaching is the best. Honestly, there are lots of YouTube channels with lots of courses, but I like the way you teach ❤
I found out why the class names were different. It seems to be a common issue. Someone explained it on Stack Overflow,
"The table class wikitable sortable jquery-tablesorter does not appear when navigating the website until the column is sorted. I was able to grab exactly one table by using the table class wikitable sortable."
ty
Completely quick, efficient and clear. Really appreciate your effort and content, Alex! Thank you!
Thank you for doing this Alex. I learned a lot and followed along while watching this series so that I could learn how to do this as well. Now all I need to do is practice, practice, practice.
At 7:33, I think the reason for getting a NoneType object as output is because we're using find() with index [1]. find_all() would make more sense if we're sticking with the index, since find() returns only the first instance, i.e. index [0]. Nevertheless, excellent content as always! Truly appreciate the efforts!
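A quick sketch of the difference being described here (soup as in the video):

first_table = soup.find('table')      # a single Tag (the first match), or None
all_tables = soup.find_all('table')   # a list of Tags, safe to index
second_table = all_tables[1]          # this is how to reach a later table
# find() returns one Tag (or None when nothing matches), so tacking [1]
# onto it never selects a second table; only find_all() gives a real list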
Honestly, I loved creating the project and the learnings; thank you so much.
Thanks for the tutorial!
I was always told not to add to a dataframe row by row (probably slower for much larger data),
so I appended to a list and created a DataFrame off that: pd.DataFrame(company_list, columns=world_table_titles).set_index(['Rank'])
I finished the tutorial today and ended with awesome success. I faced some trouble since I used a different site, but yeah, my scraping is going well!
Thank you so much!
Dude, it's awesome! Just keep teaching. Short, free of long stories, useful, with up-to-date data! That's all I ever want.
Going through this series for a personal project, such wonderful content! For the class tags, it seems like when there's a space, bs4 ignores the 2nd "part". For instance, in my project I'm seeing an element where I just need to ignore the "list-unstyled" part of the class for the soup.find to work.
Didn't read through all the comments here, so you might have already figured that out and shared, but wanted to comment anyway. Cheers!
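A sketch of what's going on ('list-unstyled' comes from the comment; 'nav-list' is a hypothetical sibling class): bs4's class_ filter matches individual class tokens, while a string containing a space must match the whole attribute value exactly.

# suppose the element is <ul class="nav-list list-unstyled"> ('nav-list' is hypothetical)
element = soup.find('ul', class_='nav-list')        # matches on a single token
element = soup.find('ul', class_='list-unstyled')   # so does this
element = soup.select_one('ul.nav-list.list-unstyled')  # CSS selectors chain classes cleanly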
Hey Alex, I am so proud of the amazing job you are doing, thank you for the amazing project, I am studying for a job interview tomorrow and I know I will ace it coz Alex is my teacher.
Hello. How did it go with the interview? Just to help us transition into the industry.
@@markchinwike6528 Hello sir, I had the interview and it was a success. It majorly focused on SQL, and the skills here are more than enough. I have the second interview two weeks from now.
Thanks, Alex!
This was a really helpful lesson and project. It helped me get a better understanding of web scraping and restructuring the data. Now I feel confident in applying this to a project I've been working on.
Nicely explained and very simple 👍, but for someone who has little understanding of programming, it can be a problem. For example, I collect this data in a few clicks 😉
This got very enjoyable at the end when I exported it as a CSV 😁 Thanks for this, man.
For anyone else who may have run into the same issue of the table find/find_all not looking the same, here's what happened. In the browser inspector, the top citations section was counted as a table, but when I extracted the page into the Jupyter notebook, it wasn't counted as a table. So instead, I had to use index 0 to get the correct table. Hope this helps!
Thank you so much for sharing your valuable lesson for free. Wishing you continued success and growth in your career!
You made this wayyyy easier than I thought it would be! Worth a sub from me sir!
Thanks a lot, Alex. This helped me a lot in exploring web scraping; thanks for making it interesting and on point.
The way I was waiting for this video 😂... thank you, Alex.
This was one of the greatest videos I have ever seen. Thank you very much! 🙃🙃🙃🙃🙃🙃😊
This man is a life saver😭😭... Thank you sir❤️❤️
Hello Alex Sir,
1. First of all, your work and teaching skills are quite remarkable. You make the learning process easy and smooth, which is also helping numerous learners.
2. I am following the whole process side by side, but at the end the number of rows and columns becomes (400, 7) when I apply df.shape. On the other hand, when I look closely, you have only (100, 7). I need some guidance on that. Please resolve my issue.
3. Eagerly waiting for your reply.
Thank You.
A fabulous video that has been of great help in orienting our new collaborators. Your generosity is highly valued!
Very helpful video. Love the troubleshooting as you go and the simple explanation of how you're working through it. Thank you.
If anyone is having issues around 13:31 when we state the dataframe columns, try adding
, dtype='object'
after world_table_titles so that the data type of the columns can be set. Mine had that issue and I thought I could share :)
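i.e. something like this (a one-line sketch using the variable name from the video):

df = pd.DataFrame(columns=world_table_titles, dtype='object')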
Honestly, very informative, and this helped me learn this topic very well. The explanation of every piece of code is very useful. Thanks for making this informative video.
Nice tutorial, but there are AI tools now like Kadoa that can do all of this for you. In the time it takes for you to watch this video, you can get an AI scraper up and running.
Super excited to finish the lesson! Thank you sir. I appreciate it!
Hi Alex! Super helpful video, thank you! One detail though: the growth index is not always positive. We can see in the wiki table that both negative and positive values are present in that column. Instead of using '-' for negative values, that table uses small triangles. Could you show us how to manage that - how to convert those triangles into positive or negative values accordingly?
hey, any workaround for this?
I am sure that there is a better way to handle this, but this will work:

df = pd.DataFrame(columns=world_table_titles)

column_data = table.find_all('tr')
for row in column_data[1:]:
    row_data = row.find_all('td')
    row_table_data = [data.text.strip() for data in row_data]
    # the row's second span carries title="Increase" or "Decrease" (the triangle icon)
    if row.find_all('span')[1]['title'] == 'Decrease':
        row_table_data[4] = '-' + row_table_data[4]  # column 4 is the growth column
    length = len(df)
    df.loc[length] = row_table_data
I had my first hands-on scraping experience with you, sir.
Hey Alex!
Thanks for the great video as always!
Could you do a video on the repercussions and impact on the Data Analyst career now that OpenAI released their GPT Code interpreter?
You are perfect Alex. I loved this video! Thanks a lot.
:D
You're a 'godsend', my g
Excellent. Great video. Everything explained clearly and in a way I could follow. Thanks so much.
One word: beautiful. This video actually helped me get the client.
I loved this!!! Very good practice. I enjoyed working on this project, including the mistakes. It's always good to know that having errors doesn't make me an idiot and is part of the process. Thank you so much for everything, Alex. I am sure we all love you as well!!
Your teaching method is great, I do not deny that, but this is exhausting to watch.
Very nice video Alex thanks for sharing! (I love that it's "live" and you make mistakes too, it's more human this way!)
Thank you Alex!! The playlist was very helpful.
Thanks for the tutorial! I just found the channel and I like the way you explain it!
Hi Alex. In the Wikipedia revenue table there is a minus sign in some of the revenue rows. This is actually an extended-ASCII en dash or em dash, which will appear as another character. Look for a funky character in those rows in the output. I work in the print industry, and this is an inappropriate use of the en or em dash for us.
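If that character trips up later parsing, here is a hedged sketch of normalizing it after building the dataframe (which exact Unicode dash the page uses may vary):

# swap en dash (U+2013) and em dash (U+2014) for a plain ASCII hyphen-minus
df = df.replace({'\u2013': '-', '\u2014': '-'}, regex=True)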
So far on my web scraping journey I don’t know if web scraping is any faster than just manual copy paste unless you have repeated scrape requests of the same site or structure
Let me start by thanking you for all the tutorials in this playlist; they are totally worth my time. Thank you. What would be the reason why I have double the data? On my end I am getting 200 rows instead of 100.
I had this error too - every time you run the 'for' loop, it adds all the rows to the dataframe again. Be sure that the dataframe is empty, and only run the for loop once before exporting to CSV.
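One way to make that cell safe to re-run (a sketch; the variable names follow the video):

# re-create the empty frame in the same cell as the loop, so re-running
# the cell always starts from zero rows instead of appending 100 more
df = pd.DataFrame(columns=world_table_titles)

for row in table.find_all('tr')[1:]:
    individual_row_data = [td.text.strip() for td in row.find_all('td')]
    df.loc[len(df)] = individual_row_data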
I just have one comment: you are the best, Alex 🤩
Thank you Alex Freberg ❤❤
Excellent video, thanks.
Note: jQuery classes are added at runtime by JavaScript in the browser, so they won't appear in the direct response coming from the server.
fantastic lesson, very clear
02:26 lol... as a beginner to this, already overwhelmed with all the information I recently learned, that is exactly what I would have thought!
Hey Alex, thank you so much for your effort... it's a really super helpful series 🙏
Hey Alex, I had a problem at the very end: Excel saw the numbers as decimals (probably because my locale uses the comma as a decimal separator), so instead of 161000 it showed 161. The only solution I found was to add a cell above and write this:
df["Employees"] = df["Employees"].astype(str).str.replace(",", ".")
df["Revenue (USD millions)"] = df["Revenue (USD millions)"].astype(str).str.replace(",", ".")
I hope it helps someone. Thank you very much for this bootcamp; I send you a hug from Argentina.
Thank you Alex, I am new to web scraping and this video was helpful to me! Keep up the good work!
Sir you are a real hero 🤗
Truly! ❤
Thank You so so much for this video, Alex! It was super useful and easy to follow!
Great tutorial, got what I was looking for, thanks.
Thanks Alex for making me a great value to the world
Really helpful, thanks! You explain this muuuuch better than in the IBM Python Course haha.
Brother, did 'th' work in your case? While I was doing it, it showed all the numbering in th too. I would really appreciate your help if you reply.
@@matrixnepal4282 Did you do table.find_all('th')? I think Alex also made a similar mistake initially by doing soup.find_all('th'). It should be on the 'table'.
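The difference, as a sketch:

titles = [th.text.strip() for th in soup.find_all('th')]   # every th on the whole page, other tables included
titles = [th.text.strip() for th in table.find_all('th')]  # only the th cells inside the table we located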
I like your way of teaching. Looking forward to learning from you.
Thanks for making such content
Excellent work, sir!!! I really appreciate your work. Believe me, you are a great mentor!
Hi! I know it's not a pandas tutorial, but anyway, pandas can parse html by itself. Just pass your table to pandas.read_html() function.
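For reference, a sketch of that shortcut (it needs an HTML parser such as lxml installed, the right list index depends on the page, and the URL is an assumption based on the tutorial):

import pandas as pd

# read_html returns a list of DataFrames, one per <table> it can parse
tables = pd.read_html('https://en.wikipedia.org/wiki/List_of_largest_companies_in_the_United_States_by_revenue')
df = tables[0]  # pick the right one by inspecting each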
Thank you so much! Very clear and well explained!
A question: how can we scrape 'td' and 'th' at the same time within the same tbody <tr> tags?
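One possible answer (a sketch, not from the video): find_all accepts a list of tag names.

for row in table.find_all('tr'):
    cells = row.find_all(['th', 'td'])  # th and td together, in document order
    print([cell.text.strip() for cell in cells])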
Thanks, this video is really helpful for me at this moment !
Perfect 🫶❤
Wow, Alex I totally enjoyed this. You make it so easy to understand. Now I need to go through your pandas tutorial and learn data manipulation. Thanks for being there!
Hi Alex (as if!)
Thanks for all the content
17:51 I thought you were like every other guy, but you are special, Alex.
We love you too Alex ♥ thank you for such great videos
56/74! I'm almost there, Alex) Ty for your hard work. It is a really helpful bootcamp. But I have only one question for you: why are you still a Data Analyst and not moving on to Data Scientist or Data Engineer?
I really salute your work. Thank you.
Always been helpful. Bless you❤
Thank you, sir, for making it easier.
Thank you bro, it is so understandable.
Simply wow!!! Hats off!
I’m going to do this today! Thank you Alex 😄
Yes
fantastic way of explaining things
I really like your project! I appreciate you.
Much needed video ❤
Very very useful! Great video.
Thanks for the videos as usual Alex !
This is a fun project. Thanks for this.
Thank you, Alex.
Alex, you are great.