Python Web Scraping Tutorial - Automate Stock Market Data Collection

  • Published: Nov 27, 2024

Comments • 35

  • @WillsJazzLoft
    @WillsJazzLoft 5 months ago +4

    Hey Brandon, I'm a Python noob. Let me start off by giving you a hearty thanks for sharing your code. I'm using a Linux distro, and the output rendered dramatically differently from the way it does on Windows. I had to paste the output into a word processor to see precisely the same results that you had. By now, I am already familiar with for loops. However, I'll need to study lines 14 to 29 to fully comprehend that segment of the program. Thank you again.

  • @SanjayRaut-l4v
    @SanjayRaut-l4v 9 months ago +1

    You made it so simple, thanks!

  • @nhuthaonguyen-tw3hw
    @nhuthaonguyen-tw3hw 1 year ago +1

    Hi Brandon Harding, thank you very much for your video. It helped me so much in learning how to scrape data from a web page. If it's okay, could you make another video on saving the scraped data into an Excel file? Have a nice day. 😀

  • @Niwa7
    @Niwa7 1 year ago +2

    That's really a good start... clear and simple explanation. All the best, please do more Python projects.

    • @brandonharding216
      @brandonharding216  1 year ago +1

      Thank you! It’s my first one of these videos so it’s great to hear that.
      I have a couple more videos to make that will be a continuation of this video. The series after that will be about using ML to make predictions.

    • @brandonharding216
      @brandonharding216  1 year ago

      Hey @sambecker6024! Sorry for the delay. I’ve been building a web application for a startup using Python Flask and am looking forward to making a video detailing this project.

  • @simosoikonomidis8506
    @simosoikonomidis8506 23 days ago +1

    Hey, awesome video! I'd like some insight into a problem I came across. My URLs end with (start=0&count=25, start=25&count=25 ... start=250&count=25), so I modified your code, and when I printed the pages I did get a link for a different table. However, my final list contains only the 25 tr elements from the first page, repeated 11 times, so 275 rows in total. I am only getting the first table 11 times, even though my URLs lead to different tables. Would love some insight.

    • @brandonharding216
      @brandonharding216  23 days ago

      It sounds like your code is continuously fetching the first page. You can try something like this to debug and verify that you're generating the correct URLs:
      for start in range(0, 275, 25):
          url = f"example.com/path?start={start}&count=25"
          print(url)
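      If the printed URLs look right, a common cause of this symptom is reusing the response from the very first request on every iteration. Here is a minimal sketch that fetches and parses each page inside the loop; the URL pattern and the bare "table" selector are placeholders, not your actual site:
      import requests
      from bs4 import BeautifulSoup

      all_rows = []
      for start in range(0, 275, 25):
          # placeholder URL pattern; substitute the real site's path
          url = f"https://example.com/path?start={start}&count=25"
          response = requests.get(url)        # fetch this page, not a cached first response
          soup = BeautifulSoup(response.text, "html.parser")
          table = soup.find("table")          # adjust the selector to match the site's table
          if table is not None:
              all_rows.extend(table.find_all("tr"))
      print(len(all_rows))                    # should now count rows from every page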

  • @willykitheka7618
    @willykitheka7618 1 year ago +1

    Thanks for sharing

  • @Mattie_LIGHT
    @Mattie_LIGHT 9 months ago +1

    Thanks. I can’t believe I actually got this to work. How do I put the info in a cover file?

    • @brandonharding216
      @brandonharding216  8 months ago +1

      I had to look up what a cover file is. If you'd like the data in a csv file, you can use a library like 'pandas' to export the data to a csv.
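      For example, a minimal pandas sketch, assuming the scraped values have already been collected into rows (the column names and values below are made-up placeholders):
      import pandas as pd

      # hypothetical rows standing in for the values scraped from the table
      rows = [["AAPL", 189.50], ["MSFT", 410.20]]
      df = pd.DataFrame(rows, columns=["symbol", "price"])
      df.to_csv("stocks.csv", index=False)  # writes stocks.csv next to the script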

  • @JasonWhite-r9d
    @JasonWhite-r9d 8 months ago +1

    Hey, I'm new to coding and I don't know if it's just me, but the code only seems to work on the website in the video. Any help would be appreciated, thanks.

    • @brandonharding216
      @brandonharding216  8 months ago +1

      Good question! This code scrapes the contents of an HTML table with class "tabMini tabQuotes". If you use another website, you'll need to inspect the HTML of that new website and identify the id, class, etc. of the elements that contain the data you'd like to scrape.
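      As a rough illustration, a minimal BeautifulSoup sketch for targeting a table by its class; the URL is a placeholder and the class shown is the one from the video, so swap in whatever you find when inspecting your own target page:
      import requests
      from bs4 import BeautifulSoup

      # placeholder URL; inspect the target page in your browser's dev tools first
      response = requests.get("https://example.com/quotes")
      soup = BeautifulSoup(response.text, "html.parser")

      # class_ must match the class attribute you found while inspecting the page
      table = soup.find("table", class_="tabMini tabQuotes")
      if table is not None:
          for row in table.find_all("tr"):
              print([cell.get_text(strip=True) for cell in row.find_all(["td", "th"])])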

  • @johnclay7422
    @johnclay7422 1 year ago +1

    Hello sir! I hope you are enjoying full health. I am new to learning Python but very interested. Please guide me on how I can begin. Thanks, sir.

    • @brandonharding216
      @brandonharding216  1 year ago

      Hi there! I started out about a year ago by making a basic "to do" web app in Flask (there are tons of tutorials online). That will give you a solid foundation to build upon. I might make a short intro video if that's something you'd be interested in.

    • @johnclay7422
      @johnclay7422 1 year ago

      @@brandonharding216 Thanks, sir. Sure, I will find those videos whenever I need help. You will be a beacon for me. Thanks a lot for your support.

  • @aineshbalaga1216
    @aineshbalaga1216 5 months ago +1

    So does this scrape through the entire list of stocks listed on the NASDAQ? Also, to make the output more readable, can we store this info in a CSV? And say I wanted to create a leaderboard on a separate website of my own, how could I do that?

    • @brandonharding216
      @brandonharding216  5 months ago

      This code will scrape all the data from the financial website in the video. This output can absolutely be saved to a csv (another video coming soon).

    • @brandonharding216
      @brandonharding216  5 months ago

      Creating a separate website that scrapes this data is more involved but can be done. I would recommend using Flask (Python) for the website backend, as in the sketch below.
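      A rough sketch of that idea, assuming the scraping logic is wrapped in a function of your own; the scrape_quotes function, its sample return value, and the /leaderboard route are made up for illustration:
      from flask import Flask, jsonify

      app = Flask(__name__)

      def scrape_quotes():
          # placeholder: call your existing scraping code here and return
          # a list of dicts such as [{"symbol": ..., "price": ...}]
          return [{"symbol": "AAPL", "price": 189.50}]

      @app.route("/leaderboard")
      def leaderboard():
          # serve the freshest scraped data as JSON for the front end
          return jsonify(scrape_quotes())

      if __name__ == "__main__":
          app.run(debug=True)
      The front end of the separate site could then fetch /leaderboard and render the data however you like.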

  • @markmaher-9712
    @markmaher-9712 1 year ago +1

    Thanks. How can I scrape this page automatically when its values change? Should I run it on a server or something?

    • @brandonharding216
      @brandonharding216  1 year ago +1

      It really depends on what your end goal is. If you'd like to create a web app and deploy it to the cloud, Celery is a great option (docs.celeryq.dev/en/stable/getting-started/introduction.html).
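      For a rough idea of what that looks like, here is a minimal periodic-task sketch, assuming the file is saved as tasks.py and a Redis broker is running locally; the task body and the 15-minute schedule are placeholders:
      from celery import Celery

      # assumes this file is saved as tasks.py and Redis is running at the default local address
      app = Celery("tasks", broker="redis://localhost:6379/0")

      @app.task
      def scrape_prices():
          # placeholder: put the scraping logic from the video here
          print("scraping latest prices...")

      # ask celery beat to run the task every 15 minutes
      app.conf.beat_schedule = {
          "scrape-prices-every-15-minutes": {
              "task": "tasks.scrape_prices",
              "schedule": 15 * 60,  # seconds
          },
      }
      Starting the worker with the beat scheduler enabled (celery -A tasks worker -B) would then re-scrape on that schedule.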

    • @markmaher-9712
      @markmaher-9712 1 year ago +1

      @@brandonharding216 I want to run a bot so that when I receive a Telegram message from any channel, the bot automatically copies it and sends it to another Telegram user. If you can help me with this, how do I keep the Python bot listening for new messages?

  • @debojyotidebnath4703
    @debojyotidebnath4703 1 year ago +1

    Extract the price, date, and stock name from the TradingView platform after every left click and make a log file in CSV or Excel format. Is that possible?

    • @brandonharding216
      @brandonharding216  1 year ago +2

      Anything is possible! This sounds like it would require some combination of JavaScript and Python.

    • @debojyotidebnath4703
      @debojyotidebnath4703 1 year ago

      @@brandonharding216 If you can help me with a solution for this, or make a video out of it, it would be a great help.

  • @excelling_excel
    @excelling_excel 1 year ago +2

    Is web scraping of free/open public data legal?

    • @brandonharding216
      @brandonharding216  1 year ago

      Data that is freely available to the public and doesn't require authentication is typically considered fair game for web scraping. But always check the terms of service on the website you are planning on scraping.

  • @sohamdutta1395
    @sohamdutta1395 7 months ago +1

    Why is it that I am getting only one company?

    • @brandonharding216
      @brandonharding216  7 months ago

      Can you share your code in a GitHub repository?