Take my courses at mlnow.ai/!
Can you create a pandas dataframe with the OHLCV values of just one S&P 500 or NASDAQ-100 company in Python, and then append to the df ALL the indicators / oscillators / candlesticks from TA, TA-Lib, pandas-ta, and FinTA; then make the df display the % change every day (using 9-, 12-, 26-, 50-, 75-, 100-, and 200-day windows); and then append to the df the #1 - #9 highest-performing indicator / oscillator / candlestick NAME (not the percent) every week / month / 3 months / 6 months / year? So that you're tracking which technical indicator(s) is/are winning the most?
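For the first steps, something like this sketch is what I have in mind (assuming yfinance and pandas-ta are installed; the ticker "AAPL" and the column flattening are just my guesses and may need adjusting for your library versions):
import yfinance as yf
import pandas_ta as ta  # registers the df.ta accessor
df = yf.download("AAPL", start="2020-01-01")  # OHLCV for one S&P 500 company
df.columns = [c[0] if isinstance(c, tuple) else c for c in df.columns]  # flatten MultiIndex columns if present
df.ta.strategy("All")  # pandas-ta bulk runner: appends its built-in indicators (API may differ by version)
for w in (9, 12, 26, 50, 75, 100, 200):
    df[f"pct_change_{w}d"] = df["Close"].pct_change(w)  # % change over each window
Ranking the top indicator names per week/month/etc. would then be a resample-and-rank step, which is the part I'm unsure how to structure.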
A bit late to the party, but why not use yfinance?
It's not a prediction, it's a simple lag.
Exactly.
Every time a YouTuber tries predicting the next day's price using the previous day's price, it conclusively proves that they have no freaking idea how ML works.
Indeed, I've seen numerous videos and tutorials of LSTM models that "perfectly" predict future prices. In reality, the model just predicts the last value, so as you mentioned, it's a simple lag(1) model. One way to solve this issue is to make sure the data is stationary, for example by predicting the log of the returns instead of the prices.
I'm new to ML, would you mind explaining further?
@@larenlarry5773 Basically, using an LSTM to predict stocks is just bullshit. If it were that simple, no one would lose money.
One reason behind this is the choice of loss function: using an L1/L2 loss means the model will try to predict a value close to the actual value.
In stock data, yesterday's value is usually the closest value to today's. That's why, when the LSTM predicts today's value, the prediction ends up very similar to yesterday's actual value.
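To make the lag(1) point concrete, here's a tiny sketch (toy numbers of my own) showing the naive baseline and the log-returns target suggested above:
import numpy as np
import pandas as pd
prices = pd.Series([100.0, 101.5, 100.8, 102.3, 103.0, 102.1])  # toy closing prices
naive_pred = prices.shift(1)  # "model": tomorrow = today, i.e. lag(1)
log_returns = np.log(prices / prices.shift(1))  # a roughly stationary target instead of raw prices
print(naive_pred.dropna().values)  # an L1/L2-minimizing LSTM on raw prices ends up close to this
print(log_returns.dropna().values)  # predicting these avoids the copy-yesterday trap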
You have an irregularly sampled time series (so under this approach, t-7 for one row may actually be 9 days prior). I realize opening that can of worms gets into a whole nichey area rife with salami-slicing publications, but it would have been really great to see it addressed with a carry-forward or something.
So, is an irregularly sampled time series OK, or do you have to handle it by forward filling? I have also heard that if it's irregular you have to keep the time column, but if it's uniform you can drop the time column.
Such a clear explanation; definitely the best channel for PyTorch implementations. SUBSCRIBED.
Glad you enjoyed it!
What timing, Greg! You just published a video I was looking for. Thanks a lot!
Not a coincidence, I read your mind!!
Great content here Greg! I learned so much from this video, especially as I coded along with it. I also happened to play around with the model architecture and the inputs, trying out a bidirectional LSTM, a GRU, increasing the sequence length, and extending the input features by incorporating other columns. Thank you again!
I'm so happy to hear that :) yeah when you allow yourself to really play around with things you can learn a lot :)
I don't get why people always use unpredictable numbers like stock prices and sunspots to demonstrate neural networks. You can't tell how good or bad the results are. It makes much more sense to use predictable data, so we know which model works better for which types of data.
Thanks a lot Greg, all your videos on LSTMs are really helping with my Master-Thesis!
Super glad to hear it!
I'm here for my Bachelor's thesis :D Hope you successfully handed yours in!
@@jackikoch837 Yes, I graduated successfully last month! :)
Much success to you, too! :)
Thank you very much for the clear instructions!
Thanks to you, I launched my first neural network!
Greetings from Russia :)
Greetings! Glad to hear it 🙂🙂
🎯 Key Takeaways for quick navigation:
00:00 🌟 *Introduction to LSTM stock forecasting with PyTorch*
- Overview of the tutorial's goal to teach LSTM stock forecasting using PyTorch.
- Mention of key libraries and tools: pandas, numpy, matplotlib, and PyTorch.
02:02 📊 *Data Preparation and Analysis*
- Loading and examining Amazon's stock history data, focusing on the closing value.
- Explanation of stock value adjustments like splits to maintain comparison standards with other companies.
04:08 🔧 *Preparing Data for LSTM Input*
- Transformation of the dataset to include historical closing values for prediction.
- Setup for using GPUs in PyTorch for model training and explanation of data preprocessing steps, including normalization.
06:25 💻 *LSTM Model Setup and Training Preparation*
- Detailed walkthrough of setting up the LSTM model in PyTorch, including creating custom dataset classes and data loaders.
- Explanation of splitting the dataset into training and testing sets, and preparing the data for the LSTM model with appropriate reshaping and normalization.
16:49 🤖 *LSTM Model Configuration and Initialization*
- Explanation of LSTM model structure, including input size, hidden layers, and the fully connected layer.
*- Focus on closing value as the single feature for prediction.*
*- Use of a single stacked LSTM layer to avoid overfitting.*
19:34 🛠️ *Training Loop Setup and Execution*
- Setup for training and validation loops, including specifying learning rate, epochs, and the mean squared error loss function.
*- Introduction of custom functions for training and validation processes.*
*- Discussion on the importance of loss function choice and optimizer settings.*
24:09 📉 *Prediction and Plotting*
- Generating predictions from the trained model and plotting against actual values.
*- Process for converting model predictions back to original scale for meaningful comparison.*
*- Visualization of model performance on training data.*
28:32 🔍 *Evaluation and Final Thoughts*
- Evaluation of model performance on test data and final remarks on stock forecasting.
*- Emphasis on the complexity and challenges of accurate stock prediction.*
*- Advice against over-reliance on model predictions for stock trading decisions.*
Made with HARPA AI
Keep it up Greg! Enjoying this series very much 😊
Super glad to hear that 😊
If you only scale the X data and not the y data, the predictions will be in normal scale and there is no need to perform inverse transform on y_pred. 😀
Take care when doing this for neural network-based ML problems. NNs benefit from label scaling significantly because they output predictions in a range determined by their final activation function. E.g. if you are using a hyperbolic tan activation function, the output will be in range [-1,1], and if your label vector isn't within that range then the model will fail to converge.
Hello, great tutorial!
Since you made a comment on the inversion:
To avoid the workaround with the dummies on the inversion, you need to create two different scalers, one for X and one for y. Then you can inverse each scale separately, with no need to build the dummy value matrix.
Something like this (create_feature_set_df and LOOBACK_STEPS are my own helper and constant):
import numpy as np
from sklearn.preprocessing import MinMaxScaler
mm_scaler_x = MinMaxScaler(feature_range=(-1, 1))
mm_scaler_y = MinMaxScaler(feature_range=(-1, 1))
orig_dataset = create_feature_set_df(ds.df['Close'].to_frame(), LOOBACK_STEPS).to_numpy(dtype=np.float32)
X_orig = orig_dataset[:, 1:].copy()
X_orig = np.flip(X_orig, axis=1)  # oldest lag first, matching the video
y_orig = orig_dataset[:, 0].copy().reshape(-1, 1)
X = mm_scaler_x.fit_transform(X_orig)
y = mm_scaler_y.fit_transform(y_orig)
# later, mm_scaler_y.inverse_transform(y_pred) recovers prices with no dummy columns
Also, since the dataframes/numpy arrays do not contain objects, it is sufficient to use the numpy or dataframe copy functions (no need for deepcopy).
Thank you for your great videos! I learn so much from them!
Ah, yes that probably would have been a good idea. Thanks for providing this, I really appreciate it! Cheers :)
How can I use this to predict next week's prices?
You can't.
Thank you Greg! I'm curious: if you include more features in X (the training dataset), for example 5 features instead of 1, while still looking back 7 days, how do you reshape your input (X_train) structure? Thanks!
Hi Greg, great content! Just wanted to say that the win rate is more useful for testing whether the model is any good; you can calculate it by simply counting how many times the predicted direction (up/down) is correct.
Sounds good, I'll try that!
That is directional accuracy, not win rate. Win rate relates to how much money you would have won (that's called backtesting).
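For what it's worth, a minimal sketch of that directional check (y_true and y_pred are hypothetical aligned price arrays):
import numpy as np
y_true = np.array([100.0, 101.0, 100.5, 102.0, 101.0])  # actual closes
y_pred = np.array([100.2, 100.8, 101.0, 101.5, 101.8])  # predicted closes
true_dir = np.sign(np.diff(y_true))  # actual up/down move each day
pred_dir = np.sign(y_pred[1:] - y_true[:-1])  # predicted move relative to the last known close
print((true_dir == pred_dir).mean())  # fraction of days the direction was called correctly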
Why is the batch loop at 15:10 an enumeration just to throw away the integer index? I'm not sure this guy knows what he's doing.
Thank you for this tutorial. However, I was wondering whether there was a possibility of data leaking from training to testing given that you scaled all the data and then split it.
Yes, there is. You should fit the scaler on training data, transform the training data, then directly transform the test data without re-fitting the scaler.
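Something like this, as a sketch (toy arrays standing in for the already-split features):
import numpy as np
from sklearn.preprocessing import MinMaxScaler
X_train = np.array([[1.0], [2.0], [3.0]])  # stand-in for the training features
X_test = np.array([[4.0], [5.0]])  # stand-in for the test features
scaler = MinMaxScaler(feature_range=(-1, 1))
X_train_scaled = scaler.fit_transform(X_train)  # learn min/max from training data only
X_test_scaled = scaler.transform(X_test)  # reuse those statistics; never re-fit on test data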
I'm getting a little confused about how you would apply the model to actually predict days ahead in the future, since in this LSTM the future days are not in the dataframe. I imagine a non-trivial implementation where the model always takes the latest available days.
Could anyone give a hand with that?
I'm also wondering about this. Given the lookback, the model should be able to predict that many days into the future. How can I implement the model to find the predicted price target?
So how does the graph work?
How do I test the data for the future? I don't have the actual future data; this makes sense for backtesting, but what about forecasting?
I have the same question, and several guides don't explain it :(
@19:20 Can somebody explain to me what out[:, -1, :] does? I'm trying to learn the burn crate for Rust, which is young and doesn't have enough documentation, so I'm stuck referencing PyTorch, which is its influence.
Actually, it takes the LSTM output at the last time step: out has shape (batch, seq_len, hidden_size), and out[:, -1, :] keeps only the final step's hidden vector for each sequence in the batch.
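A quick shape demo of what that indexing does (assuming batch_first=True, as in the video):
import torch
lstm = torch.nn.LSTM(input_size=1, hidden_size=4, num_layers=1, batch_first=True)
x = torch.randn(16, 7, 1)  # (batch=16, seq_len=7 lookback days, features=1)
out, (h_n, c_n) = lstm(x)
print(out.shape)  # torch.Size([16, 7, 4]): one hidden vector per time step
print(out[:, -1, :].shape)  # torch.Size([16, 4]): only the final time step of each sequence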
Hello Greg, nice video; it's really helping me understand LSTM and PyTorch deeply. I have one question: if we need to add more than one feature to the prediction, what do we need to do with the lookback?
Thank you so much. I was really struggling to figure out how to format the data to feed into an LSTM model in PyTorch; this really helped me conceptualize it.
Thank you very much! You are a lifesaver!!
The blind leading the blind 🙂
Could you use your methodology to identify candlestick patterns and assess their reliability in predicting future price direction?
Only one way to find out
No, you can't; this method only predicts the price from past prices.
Everything worked until running the batch process...
Running on the most current version of Python, 3.11...
This is the error it shows:
"NotImplementedError: Module [LSTM] is missing the required "forward" function"
Getting the same error on the Colaboratory notebook as well...? Thanks for clearing that up in advance...
-ER x
This is the best tutorial for LSTMs with PyTorch!
Do you have a document that describes all of the terminology you’re using?
Thank you very much for your video. I have a question:
In your video, it seems the model only predicts one point into the future; if I want to predict 100 points or more, what do I do?
You could do that by recursively feeding the latest prediction back in as the latest input.
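Roughly like this, as a sketch only; model and last_window are assumed to already exist, with last_window shaped (1, lookback, 1) in the scaled space:
import torch
preds = []
window = last_window.clone()  # the most recent lookback days, already scaled
model.eval()
with torch.no_grad():
    for _ in range(100):  # 100 steps ahead
        nxt = model(window)  # shape (1, 1): next-step prediction
        preds.append(nxt.item())
        window = torch.cat([window[:, 1:, :], nxt.view(1, 1, 1)], dim=1)  # slide the window forward
Note that errors compound quickly this far out, so treat the tail of preds with suspicion.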
Why do we need to do min-max scaling on the data if there is only one feature? Also, why is it necessary to create a custom Dataset class? Can you elaborate on that?
Keep it up Greg! I really enjoy your videos, and your way of teaching is better than I could ever manage, and I'm in my PhD. Reach out to me if you'd like to share more ideas; I have some I'd like to run by you.
That would be great, thanks so much!
@@GregHogg thanks man, I dropped you an email; please check your inbox/spam.
Hi Greg, does the MinMaxScaler you've applied to the whole dataset cause information leakage?
Yes, it should; I was wondering the same thing.
Yes, it would. You need to split first and then scale, not scale and then split.
Hi Greg, nice video!
Is there any risk of data leakage in your train and validation setup?
As soon as you run your model on X_test => model(X_test.to(device)).detach().cpu().numpy().flatten(), don't you have the lags in the test data, resulting in information leakage?
Why are there only training and testing datasets? Isn't a validation dataset necessary?
Are h0 and c0 the initial input and forget gate tensors?
Hi Greg, could you create a video on how to predict stock prices with Transformer neural networks?
Hi Greg, is it OK to scale the entire dataset? Because most of the time we scale only the train set.
Did you need to take the dataframe, put it into numpy, and then move to a tensor? Why not go straight to a tensor?
I think he's said before it's just his habit.
That's impressive. Do you have any plans to upload a model that predicts using the CNN+LSTM (ConvLSTM) technique?
No, but maybe I should!
Another amazing guide 👌👏🙏
Thanks for the tutorial, really helpful. If I run it on Google Colab it works, but not on my local machine. It always errors out in the validation function with: "For unbatched 2-D input, hx and cx should also be 2-D but got (3-D, 3-D) tensors." Do you have any idea why?
Probably a pip package nightmare haha sorry about that
Hi Greg, what a great video! I wonder about another type of time series, say a YouTuber's income from new videos before uploading them. Could I build a prediction model for all YouTubers with one model like yours, or do I have to build one for each? And if I need only one model, how do I achieve that? Would the YouTuber's name be in the input?
How would you build this model if you had more than one input?
Like Close and Volume.
Instead of having a 1x7 matrix you'd have a 2x7 matrix.
How would you feed this into the model?
My uneducated guess: the LSTM definition includes an input size; change that to 2. Normalize the volume data to [-1, 1] as was done for the price data, and create Volume sequences the same way as the Price sequences to use as training data. Since only Price is predicted, no change to the y (ground truth) vector is needed. This is an important question in real-world scenarios, as Volume is a strong indicator of movement and momentum.
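That matches how PyTorch expects multivariate input; a tiny shape sketch (hypothetical numbers):
import torch
lstm = torch.nn.LSTM(input_size=2, hidden_size=4, batch_first=True)  # 2 features: Close and Volume
x = torch.randn(32, 7, 2)  # (batch=32, lookback=7, features=2); real data would be scaled to [-1, 1]
out, _ = lstm(x)
print(out.shape)  # torch.Size([32, 7, 4]); a final linear layer still maps this to one Close value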
Lol this video is the definition of "Trust me bro"
Hello. I am an engineering student developing a project where I have data from 167 patients. For each patient I have a dataframe with 60 columns (features) and 5000 rows, where each row corresponds to 60 seconds. I cannot put the data of all the patients together in a single dataframe and randomly extract a percentage to train and test. What I want is to pass the data to a CNN or LSTM while taking into account that they are different patients. I thought I should arrange it in a three-dimensional array where the depth is the patients, but I don't know if that is correct or how to do it. I also have the ID of each patient, but I don't know how to use that information. Each patient's dataframe has a column at the end that is the target, the signal I want to predict. Please, could you help me and explain?
The prediction result looks incorrect; if you look closely at the last graph, the prediction is just the actual series shifted by 1.
Yep sorry there was an error
@@GregHogg Could you please fix it?
@@GregHogg so how do we stop the model from doing the shift thing? I'm having the same issue with a time series of energy prices.
Regardless of what value I set for the lookback, calling the function just gives me Close and Close(t-1) only.
I probably hard-coded a typo then
@@GregHogg Or I was the typo expert. I copied from Colab and then everything worked fine. Thanks.
Why are you shuffling the data by setting shuffle=True? In time series this isn't allowed, right?
Also, by converting to tensors you're losing precision; when data is already so closely spaced, losing precision is NOT a good idea.
Where is the TensorFlow LSTM version of this video?
It's... Somewhere!
Your features (Close) are reversed in time. Is that good for an LSTM?
I reversed them
@@GregHogg Yes, yes, I didn't notice
X = dc(np.flip(X, axis=1))  # dc is copy.deepcopy; the flip puts each lookback window in oldest-first order
Hi Greg, I have watched a lot of videos on this specific topic and this is one of the greatest, especially in the way you present it. I have a similar problem and would like to know if you can help me modify your code or refer me to another source. I want to simulate an optimization algorithm which uses one time series to predict another. I found the concept of using the last 7 observations extremely useful, but in my case it would be great if I could use the last outcome as input for the following prediction. Do you have any ideas on that?
what other videos do you suggest on this topic?
I think this is called autoregression.
How do I assess its performance with MAE and MSE?
You can definitely calculate the MAE and MSE.
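For example (a sketch; y_true and y_pred stand in for the unscaled actuals and predictions):
from sklearn.metrics import mean_absolute_error, mean_squared_error
y_true = [100.0, 101.0, 100.5]  # actual closes, back in the original scale
y_pred = [100.3, 100.7, 101.0]  # model predictions after inverse_transform
print(mean_absolute_error(y_true, y_pred))
print(mean_squared_error(y_true, y_pred))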
"np.zeros" is not defined. How can you fix it?
Just to be sure... check whether you imported numpy as np:
import numpy as np
Tell me you did look-ahead bias without telling me you did.
How do you implement this end-to-end in FastAPI? Please make a video.
Hey Greg, could you do something similar using ChatGPT or an AI program?
Very interesting!
How do you predict on a gold chart?
Great video, but why didn't you take advantage of that green screen? 😅
Perhaps he doesn't want his video taken by Mr. Green as a sample.
Seems to me the plot will always look good because the previous close is already the input :(
I have done this. It is impossible to predict actually.
😀 it doesn’t fit at all
I think you had to cut all the old small prices and keep only 5 or 6 years ago
Traning the model on 1/2 $ to predict 100$ it’s not good at all
Also I didn’t see any dense layer for aggregating outputs
Let's goooooooooooo Greg!
Thank you!
Can you explain why all TensorFlow content is about image recognition? And if we want to build visualization, data pipelines, real-time clustering, and decision making, should I go with PyTorch? I get the impression TensorFlow just doesn't have utility for plain numbers.
10:34 Fix
Sorry?
Are you PewDiePie's brother?
I am not
So what you want to predict is the first row of the tensor?
If you are going to call this a tutorial (especially when using PyTorch, which has granularity), make sure to give the "complicated" explanations you skip over so easily instead of just saying "it looks like this" or "do that". After all, that's the purpose of a tutorial, isn't it?
I think this video should be redone! The instructions are vague and the results are obviously erroneous, not just in strategy but in implementation.
Otherwise, great content and explanation.
Cease producing videos on stock predictions as they may be misleading and primarily serve to boost viewership rather than provide valuable information.
No
@@GregHogg I suggest using a proper use case for LSTMs. Stock price prediction is not one, and someone may actually use it for making financial decisions. Actual financial asset forecasting is much, much more complicated.
hrm. another lagging indicator.
This dude clearly knows nothing about the topic he is teaching lol
Yet another LSTM stock prediction tutorial making the same min/max scaling mistake. Yawn.
/watch?v=lhrCz6t7rmQ