I was building this from scratch and trying to figure out a way to do this. I'm glad I found this video.
This is exactly what I am looking for. Excellent!
Pulling out data and automatically plotting them, love it!
Thank you. I will use this concept in my personal project.
Great tutorial!
To make this work with common Western symbols/stocks (MSFT, AAPL), change the URL construction to: "+ stock_code + ('?p=') + stock_code + ('&.tsrc=fin-srch')"
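For anyone piecing that together, a minimal sketch of the resulting URL (assuming the quote-page base URL from the video; the page's tags/divs are a separate problem):

stock_code = 'MSFT'   # or 'AAPL', etc.
url = 'https://finance.yahoo.com/quote/' + stock_code + ('?p=') + stock_code + ('&.tsrc=fin-srch')
print(url)   # https://finance.yahoo.com/quote/MSFT?p=MSFT&.tsrc=fin-srch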
Great point, and it will help others tailor-make their own portfolios. I hope you don't mind if I pin your comment for others' reference. Thanks.
@@eMasterClassAcademy My pleasure.
Thank you, at last I found something very helpful. Thank you, thank you, thank you!
Excellent work!
Very interesting video. A bit over my head. I'll stick with yfinance for now. I will download the video and study it some more.
Nice video - I did something similar last year with a Raspberry Pi and had the prices displayed on an LED matrix!
hi Dr. Pi, just checked out your applications. That's so cool! I like the LED setting concept. You could even use that for setting up an alarm or signals when prices reach certain levels.
@@eMasterClassAcademy Yes, I had a buzzer attached, and it buzzed every 30 secs when it displayed a different stock.
If I were to redo it, I'd probably get a red and a blue LED and flash the price in red if the price had dropped, blue if it had gone up.
I had it plugged in, and cron started the script at 8 am when the stock exchange opened.
The visualisation is cool!
Pretty cool.......
Great tutorial
Thank you!
Incredible, keep doing more tutorials
beautiful shup❤️
Thanks for the tutorial! I do have a question: I am able to scrape and generate the .csv file. However, when I run the animate script, there's no real-time animation; it seems to just plot the current time/price in the CSV file. This occurs in both Jupyter Notebook and Spyder. Did anyone else encounter this?
# assuming these imports at the top of the plotting script
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style, animation

style.use('fivethirtyeight')
fig, ax = plt.subplots()

def animate(i):
    df = pd.read_csv('test.csv')          # re-read the CSV the scraper keeps appending to
    ys = df.iloc[1:, 2].values            # third column holds the price
    xs = list(range(1, len(ys) + 1))
    ax.clear()
    ax.plot(xs, ys)
    ax.set_title("btc")

ani = animation.FuncAnimation(fig, animate, interval=1000)
plt.tight_layout()
plt.show()
Thanks for sharing. I was just wondering whether you are running both programs at the same time: 1) one scrapes and saves updates, and 2) the other reads the saved updates and generates the plot.
You then keep these two processes, i.e. 1) and 2), running continuously.
But I built it in PyCharm, so I am really not sure whether it has problems in Jupyter. I do appreciate your feedback and hope it helps others.
Use %matplotlib notebook...
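In Jupyter, a minimal sketch of where that magic line goes (reusing the animate() idea from the comment above; 'test.csv' is just the file name from that comment):

%matplotlib notebook
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import animation

fig, ax = plt.subplots()

def animate(i):
    df = pd.read_csv('test.csv')      # the CSV the scraping script keeps appending to
    ax.clear()
    ax.plot(df.iloc[1:, 2].values)

ani = animation.FuncAnimation(fig, animate, interval=1000)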
how can I check for the correlation between the tweets and the stock prices?
Can the website's server go down from fetching data from it every x seconds? I am wondering whether too many users sending requests every second to this stock website could bring the server down. I am working on a similar software project and thinking about future performance.
The animation with 4 plots at the beginning looks very good, but it looks very different from the result plots at the end of this video?
Please help: when I try to scrape content, the output shows "None".
I wanted to get a graph of the stock's history, but I'm stuck at the inspection step. I've got the right block, but how do I get the values inside it?
Why have you used df.T?
Why do we need to transpose it?
Good content. But please: don't make me sleep in your class again.
This is an interesting project!
While plotting the graph I am getting the error "ValueError: could not convert string to float", even though the value I am getting in the CSV file looks like an integer.
If you scrape from the internet, a number can be a string, because it is written text that you copied. You can easily convert it.
A variable can hold a 2 or a '2'.
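A minimal sketch of that conversion (the comma handling matters because scraped prices often arrive as text like '1,234.56'):

price_text = '1,234.56'                      # the kind of value a scraper hands you
price = float(price_text.replace(',', ''))   # drop the thousands separator, then convert
print(price + 1)                             # now it behaves as a number: 1235.56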
I am having the same problem, and it's a problem for many of us who tried to apply this tutorial, i.e. 'NoneType' object has no attribute 'find'. Please suggest a fix.
Data is the new oil!
This is really interesting. It would be very helpful if you showed how to send this scraped data by email when a certain price level is reached.
This is an extremely good suggestion! I really appreciate your advice and will definitely try it out.
@@eMasterClassAcademy Thank you, I will be looking out for it.
I have one doubt: why are there websites where we can see a class named 'something' in Inspect Element but can't find it in the page source? Are some things hidden from the page source code?
THANKS
I would be very grateful if you could tell me why I always get the same price; the price doesn't change at all after running the code.
Thanks for watching.
There have been some updates to the backend and HTML code of Yahoo Finance, so we need to revise some of the code accordingly.
Please check out my latest video for web scraping - ruclips.net/video/A9Rj77CKpJ8/видео.html
Hope this helps.
Please help,
I can't do this in Jupyter Notebook. It always collects the data first and shows the plot after that. Can we do real-time plotting in Jupyter Notebook?
When I scrape Yahoo Finance it gives the same values every time, but on the Yahoo Finance page the price is changing.
The same thing is happening. After re-running the script, the data is not updated.
Sir, I am from India. Can you cover NSE India or any broker API development in this format? Thanks in advance.
This is an amazing video. But I have a doubt regarding HSI: if the stock code is something like ^NSEI, then what should go in the HSI list?
Use the .NS suffix, for example RELIANCE.NS.
Quick question:
What's the difference between using 'html.parser' and 'lxml' in the BeautifulSoup function?
The main difference is speed. In general, lxml is faster than html.parser.
However, speed should not be the main criterion for choosing which parser to use. You should focus on the HTML document and use the parser that works for your particular website, because sometimes one works while the others give you a "None" result.
So I will often try 1) lxml, 2) html.parser and 3) html5lib.
Details can be found on the Beautiful Soup website: www.crummy.com/software/BeautifulSoup/bs4/doc/#differences-between-parsers
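For a quick comparison, a minimal sketch (lxml and html5lib must be pip-installed first; the URL is only an example):

import requests
from bs4 import BeautifulSoup

r = requests.get('https://finance.yahoo.com/quote/0005.HK?p=0005.HK')
for parser in ('lxml', 'html.parser', 'html5lib'):
    soup = BeautifulSoup(r.text, parser)
    title = soup.title.text if soup.title else None   # None here hints the parser choked on the page
    print(parser, '->', title)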
Sir, how do I set up option chain analysis in Python?
Really nice work! Is there any possibility of combining the two programs into one?
Thanks for your comment. I was thinking it's not possible, because we would need to combine the "for loop"/"while loop" in the first program with the "while loop" (animate) in the second program.
Or something like "break the program" and "restart" it again? It seems not that straightforward.
@@eMasterClassAcademy Multithreading or multiprocessing should be a solution for that task.
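A minimal sketch of that idea in one script, with the scraping replaced by a dummy writer thread (my own illustration; swap scrape_loop's body for the Beautiful Soup code from the video):

import csv
import random
import threading
import time

import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import animation

CSV_FILE = 'test.csv'

def scrape_loop():
    # stand-in for the scraping program: append one (time, price) row every second
    while True:
        with open(CSV_FILE, 'a', newline='') as f:
            csv.writer(f).writerow([time.strftime('%H:%M:%S'), random.uniform(100, 110)])
        time.sleep(1)

threading.Thread(target=scrape_loop, daemon=True).start()   # scraper runs in the background

fig, ax = plt.subplots()

def animate(i):
    try:
        df = pd.read_csv(CSV_FILE, header=None)
    except (FileNotFoundError, pd.errors.EmptyDataError):
        return                                               # first frame may fire before any data exists
    ax.clear()
    ax.plot(df.iloc[:, 1].values)

ani = animation.FuncAnimation(fig, animate, interval=1000)
plt.show()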
You could use the yfinance library to get prices.
Yes, this is a good API.
My whole point here is to show web scraping, because some websites are useful, but they don't offer any API.
Also, if we need a more frequent interval, like less-than-1-minute data, the yfinance library can't help.
But I agree with you, yfinance is a good substitute.
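For anyone curious, a minimal sketch of the yfinance route (assumes pip install yfinance; 1m is the smallest interval it offers):

import yfinance as yf

data = yf.download('0005.HK', period='1d', interval='1m')   # one day of 1-minute bars
print(data['Close'].tail())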
It still shows me 'NoneType' object has no attribute 'find'.
Hey, I got it going, scraping and saving the CSV, but when I plot the data my graph only rises, even if the next value is smaller than the previous one... I compared your code and mine like a thousand times and I can't find what is wrong lol. If someone knows what's going on, please let me know :)
@Jamie Cliffe Hey! Yeah, I figured that out eventually by researching it in other places... but thanks, your comment will definitely help others who might have this issue! 😊
I got a problem while trying to use BeautifulSoup(r.text, 'lxml'):
Traceback (most recent call last):
File "D:/Roullete/roulletegraph.py", line 9, in
web_content = BeautifulSoup(r.text, 'lxml')
File "D:\Roullete\venv\lib\site-packages\bs4\__init__.py", line 243, in __init__
raise FeatureNotFound(
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?
Does anyone know why?
Try changing the 'lxml' to 'html.parser'.
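That error means the lxml tree builder isn't installed in your virtual environment; a minimal sketch of both fixes (install lxml, or fall back to the built-in parser):

# pip install lxml   <- installs the missing tree builder into the venv
from bs4 import BeautifulSoup, FeatureNotFound

html = '<p>hello</p>'
try:
    soup = BeautifulSoup(html, 'lxml')
except FeatureNotFound:
    soup = BeautifulSoup(html, 'html.parser')   # built-in parser, no extra install needed
print(soup.p.text)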
"Thank you for your patience."
"Our engineers are working quickly to resolve the issue."
I am watching in April 2023 and it seems like Yahoo doesn't want people to scrape their site anymore: the response ends with the two <p> tags quoted above, and there is no element to scrape when you look at r.text.
This code doesn't work; it says "animate() takes 0 positional arguments but 1 was given".
What about the source code? Or should everyone type it themselves?
Animate doesn't support Python version 3.8; what should I do?
I did the same for a website. However, the problem with this is that the processing time keeps growing, so the data arrives later and later. After a few minutes the lag becomes something like 15-20 seconds. Does anyone have a solution for that?
I have quite a big problem with this script: it does not update the data while running, which is weird, and I don't know why it happens. Can anyone help me?
The same thing is happening. After re-running the script, the data is not updated.
@@KulikovValery Hey man, glad you found my comment. I solved the issue using a completely different method: I used a library called yahoo_fin, and from that library the module called stock_info.
There is a method where you can get live prices of the stocks you are interested in with one line of code.
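For reference, a minimal sketch of what I mean (assumes pip install yahoo_fin; the ticker is just an example):

from yahoo_fin import stock_info

price = stock_info.get_live_price('AAPL')   # latest traded price, returned as a float
print(price)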
Thank you for your class! Do you come from Hong Kong?
Nice to meet you. Yes, I am from HK.
Great video! In the video you pulled the price for HK only, yet when you ran the application it pulled prices for CLP, CKH, HSBC, etc. How do you pull multiple prices in the same column? I tried stock_code = "CHK", "HK", etc., but I got errors.
Hi, which stocks are you looking for?
What I am doing is using the stock codes ['0001', '0002', '0003', '0005'] to represent the different stocks, appending their prices into the columns one by one, and then appending them as a row according to the timestamp.
The most difficult parts when you are using Beautiful Soup to do web scraping are 1) identifying the correct link and 2) identifying the correct tag/'div'.
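A minimal sketch of that loop structure, with the actual scraping stubbed out (replace real_time_price with the Beautiful Soup function from the video; the 30-second sleep is just an example):

import random
import time
import pandas as pd

def real_time_price(stock_code):
    # placeholder for the video's Beautiful Soup scraping function
    return round(random.uniform(50, 60), 2)

stock_codes = ['0001', '0002', '0003', '0005']
while True:
    prices = [real_time_price(code) for code in stock_codes]        # one value per stock
    row = pd.DataFrame([prices], columns=stock_codes,
                       index=[time.strftime('%H:%M:%S')])           # one row per timestamp
    row.to_csv('stock_prices.csv', mode='a', header=False)          # append to the CSV
    time.sleep(30)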
@@eMasterClassAcademy
I'm using T, AAPL, KO and XOM. I can only seem to pull the data for T. I'll rewatch your video and see if I can get it.
Thanks
@@Tehguy1 Please see Robert Navarro's comment; I pinned it. He kindly provided the code changes for Western symbols/stocks. Hope it helps.
@@eMasterClassAcademy Thanks, I was able to get that part. But my question is: how do you associate the different symbols with the different stock codes when you use 0001, 0002, 0003, and 0004?
At 19:18, why have you used index 1 in iloc?
Nice video, interesting!!! I am having a problem with web_content.find: AttributeError: "NoneType" object has no attribute "find"??
Hi, very often, if you search for the wrong 'div'/tag, the find() returns "NoneType", and the next .find() on it raises this error. So I suggest you keep trying different 'div's, especially the upper-level tags.
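A minimal sketch of checking each level before chaining another .find() (the class name is only a guess from the video era; copy the real one from Inspect Element):

import requests
from bs4 import BeautifulSoup

url = 'https://finance.yahoo.com/quote/0005.HK?p=0005.HK'            # example ticker
web_content = BeautifulSoup(requests.get(url).text, 'lxml')
outer = web_content.find('div', {'class': 'D(ib) Mend(20px)'})       # hypothetical class, check the page
if outer is None:
    print("div not found - try a different or upper-level tag before chaining .find() again")
else:
    print(outer.text)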
I am having the same problem, and it's a problem for many of us who tried to apply this tutorial, i.e. 'NoneType' object has no attribute 'find'. This problem occurred when I created the function; before that I was able to fetch the data without any hassle. Please suggest what may not be working right for us.
saved my arse bro ty
How did you run the 2 .py files at the same time?
I am using PyCharm. And since each script runs inside a large "for loop"/"while loop", the two files can run at the same time.
@@eMasterClassAcademy Yeah, but for Eddie: you should be careful about where you plot the data produced by your for-loop and while-loop. Those also have to be two separate "locations". And if you want to write this stuff to a file, use different files to write to. If you write to the same target file, you'd need to specify exactly when, what, and where it should be written and what your file format should be. This is tedious work and needs the for-loop and while-loop to run in sync with the help of true/false flags. For example, you only write the content produced by your for-loop to the file if the while-loop's flag is set to false; then you set the for-loop's flag to false, which triggers the while-loop to go into "true" mode and write something different, at least on a new line, with hints that the written content comes from the while-loop; then you set the flag in the while-loop to false, which in turn could trigger the for-loop again. But these are fairly complex control-flow mechanisms that need careful implementation and knowledge of the code. I'd personally look for existing examples of such things on GitHub, not on StackOverflow, because GitHub scripts more often yield full-fledged implementations rather than just snippets.
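A minimal sketch of that true/false-flag handshake, using a threading.Event as the flag (my own illustration, not from the video):

import threading
import time

write_turn = threading.Event()   # flag: when set, the for-loop side may write
write_turn.set()

def for_loop_side():
    for i in range(5):
        write_turn.wait()                        # wait for our turn
        with open('log.txt', 'a') as f:
            f.write(f'for-loop row {i}\n')
        write_turn.clear()                       # hand control to the while-loop side

def while_loop_side():
    while True:
        if not write_turn.is_set():
            with open('log.txt', 'a') as f:
                f.write('while-loop marker\n')   # write something different, on a new line
            write_turn.set()                     # hand control back to the for-loop side
        time.sleep(0.1)

threading.Thread(target=while_loop_side, daemon=True).start()
for_loop_side()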
Can you please help me with options open interest data?
Is there a way of making it print info faster? Thanks :3
In the code ani = animation.FuncAnimation(fig, animate, interval=1000), you can try reducing the 1000 to make it redraw faster. But it seems that the data you scrape does not update fast enough to match that rate.
You can try matching the time interval for data updating with the one for plotting to make it print faster and more smoothly.
Thanks
Hi, I am using Jupyter Notebook to run the files; however, I am only getting a horizontal blue line displayed on each subplot. I have the files open separately in Jupyter. What could the issue possibly be? The code seems correct.
Part 1, pulling the data and storing it in a CSV, is working, but I don't think the data is being read back from the CSV, even though the code is the same.
Very good work. Could you please make the code available?
Why would you not code it while watching the video? What do you expect to learn from just taking his code and using it for yourself instead of understanding it? Do you even know how to set up your computer so you can run it? I never get those comments saying "Give me the code." Blunt.
@@meylaul5007 Sorry, it wasn't my intention to just use your code; I only wanted to learn by looking at it. Thanks.
interesting
pip3 install beautifulsoup4, then from bs4 import BeautifulSoup (the package is named beautifulsoup4, but the import is bs4).
Can't listen to that "schoup" thing. It's an S! Soup, damn! Omg.
Which IDE are you using??
Hi, I am using PyCharm
please share source code
You made something simple very confusing.
This video is incomplete, and when you code it, it does not work.
This web scraping does not work for the Indian market, e.g. tickers with the .NS extension such as RELIANCE.NS.