Thank you. This was 43 minutes very well spent.
thank you thank you. This information is skipped in most machine learning courses, and no one else will teach you this. In practice, a lot of data has a temporal nature, while all along you only learned how to classify cats and dogs and regress house prices.
We do :)
Genius. He Makes python and time series almost easy to understand.
*Abstract*
This talk explores how to adapt machine learning models for time
series forecasting by transforming time series data into tabular
datasets with features and target variables. Kishan Manani discusses
the advantages of using machine learning for forecasting, including
its ability to handle complex data structures and incorporate
exogenous variables. He then dives into the specifics of feature
engineering for time series, covering topics like lag features, window
features, and static features. The talk emphasizes the importance of
avoiding data leakage and highlights the differences between machine
learning workflows for classification/regression and forecasting
tasks. Finally, Manani introduces useful libraries like Darts and
sktime that facilitate time series forecasting with tabular data and
provides practical examples.
*Summary*
*Why use machine learning for forecasting? (**1:25**)*
- Machine learning models can learn across many related time series.
- They can effectively incorporate exogenous variables.
- They offer access to techniques like sample weights and custom loss functions.
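As a minimal sketch (not from the talk) of the sample-weight idea: with scikit-learn, most regressors accept a `sample_weight` argument at fit time, so you can down-weight older observations. The exponential-decay weighting scheme below is a hypothetical choice for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy series whose data-generating slope changes halfway through,
# so recent observations are more informative about the current regime.
rng = np.random.default_rng(0)
t = np.arange(100)
y = np.where(t < 50, 1.0 * t, 2.0 * t - 50.0) + rng.normal(0, 1, 100)
X = t.reshape(-1, 1)

# Exponentially decaying weights: the most recent point gets weight 1.
weights = 0.9 ** (t[-1] - t)

unweighted = LinearRegression().fit(X, y)
weighted = LinearRegression().fit(X, y, sample_weight=weights)

# The weighted fit tracks the recent regime's slope (about 2) more closely.
print(unweighted.coef_[0], weighted.coef_[0])
```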
*Don't neglect simple baselines though! (**3:45**)*
- Simple statistical models can be surprisingly effective.
- Ensure the uplift from machine learning justifies the added complexity.
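A seasonal naive forecast (repeat the value from one season ago) is the kind of simple baseline meant here. A minimal sketch with a synthetic weekly-seasonal series, using numpy only:

```python
import numpy as np

# Synthetic series: weekly (period 7) seasonality plus noise.
season = 7
rng = np.random.default_rng(1)
y = 100 + 10 * np.sin(2 * np.pi * np.arange(120) / season) + rng.normal(0, 1, 120)

train, test = y[:100], y[100:]

# Seasonal naive: tile the last observed season across the horizon.
last_season = train[-season:]
forecast = np.resize(last_season, len(test))

mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"Seasonal naive MAPE: {mape:.1f}%")
```

Any ML model should be judged against a baseline like this before adding complexity.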
*Forecasting with machine learning (**4:15**)*
- Convert time series data into a table with features and a target variable.
- Use past values of the target variable as features, ensuring no data leakage from the future.
- Include features with known past and future values (e.g., marketing spend).
- Handle features with only past values (e.g., weather) by using alternative forecasts or lagged versions.
- Consider static features (metadata) to capture differences between groups of time series.
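The conversion to a table can be sketched with pandas: `shift()` moves past target values forward in time so that each row only sees earlier observations. The sales/promo columns below are hypothetical, not from the talk.

```python
import pandas as pd

# Hypothetical daily sales series with a future-known feature (promo flag).
df = pd.DataFrame({
    "sales": [10, 12, 13, 12, 15, 16, 18, 17],
    "promo": [0, 0, 1, 0, 0, 1, 0, 0],  # known in advance, usable as-is
})

# Lag features: row t only ever sees sales from t-1 and t-2 (no leakage).
df["sales_lag1"] = df["sales"].shift(1)
df["sales_lag2"] = df["sales"].shift(2)

# Drop the initial rows where the lags are undefined.
table = df.dropna().reset_index(drop=True)
print(table)
```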
*Multi-step forecasting (**8:07**)*
- Direct forecasting: Train separate models for each forecast step.
- Recursive forecasting: Train a one-step ahead model and use it repeatedly, plugging forecasts back into the target series.
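The recursive strategy can be sketched as follows (a toy illustration, not the talk's code): fit a one-step-ahead model on lag features, then at predict time feed each forecast back in as the newest lag.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(0.5, 1, 200))  # toy trending series

n_lags, horizon = 3, 5

# Supervised table: row k holds [y[k], y[k+1], y[k+2]] with target y[k+3].
X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
target = y[n_lags:]
model = LinearRegression().fit(X, target)

history = list(y)
forecasts = []
for _ in range(horizon):
    x_next = np.array(history[-n_lags:]).reshape(1, -1)
    y_hat = model.predict(x_next)[0]
    forecasts.append(y_hat)
    history.append(y_hat)  # plug the forecast back into the target series

print(forecasts)
```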
*Cross-validation: Tabular vs Time series (**11:32**)*
- Randomly splitting data is inappropriate for time series due to temporal dependence.
- Split data by time, replicating the forecasting process for accurate performance evaluation.
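Time-ordered splitting is available off the shelf in scikit-learn's `TimeSeriesSplit`; a minimal sketch showing that every training fold strictly precedes its test fold:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Expanding-window splits: train indices always precede test indices.
y = np.arange(12)
tscv = TimeSeriesSplit(n_splits=3, test_size=2)
for train_idx, test_idx in tscv.split(y):
    print(train_idx, "->", test_idx)
```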
*Machine learning workflow (**13:00**)*
- Time series forecasting workflow differs significantly from classification/regression tasks.
- Feature engineering and handling vary at predict time depending on the multi-step forecasting approach.
*Feature engineering for time series forecasting (**14:47**)*
- Lag features: Use past values of target and features, including seasonal lags.
- Window features: Compute summary statistics (e.g., mean, standard deviation) over past windows.
- Nested window features: Capture differences in various time scales.
- Static features: Encode categorical metadata using target encoding, being mindful of potential target leakage.
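Window features can be sketched in pandas by combining `shift` and `rolling`. Calling `shift(1)` before `rolling` makes the window at time t end at t-1, so the current target never leaks into its own features (the `demand` values are hypothetical):

```python
import pandas as pd

s = pd.Series([3, 5, 4, 6, 8, 7, 9, 10], name="demand")

# Rolling statistics over past values only: shift(1) first, so the
# window at time t covers t-3 .. t-1, never t itself.
feats = pd.DataFrame({
    "demand": s,
    "roll_mean_3": s.shift(1).rolling(3).mean(),
    "roll_std_3": s.shift(1).rolling(3).std(),
})
print(feats)
```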
*Overview of some useful libraries (**27:01**)*
- tsfresh: Creates numerous time series features from a data frame.
- Darts and sktime: Facilitate forecasting with tabular data and offer functionalities like recursive forecasting and time series cross-validation.
*Forecasting with tabular data using Darts (**28:04**)*
- Example demonstrates forecasting with lag features and future known features on single and multiple time series.
Disclaimer: I used Gemini 1.5 Pro to summarize the YouTube transcript.
This is by far one of the best wholesome videos on time series forecasting!!! loved it
The word wholesome doesn't mean what you think it means :) Did you mean comprehensive or extensive?
@hp5072 what does it mean?
Amazing dump of knowledge. I have come back to this video multiple times.
Very good talk. The presenter is a great teacher!
Only a true teacher explains complex topics simply and visually! Thank you! The best forecasting presentation!
This is a truly useful session. Thank you for sharing the knowledge!
Excellent presentation. Great work Kishan
dude is a PhD for a reason, awesome stuff god damn
Great Presentation ! Interesting and clear
finally, someone can articulate this topic well...
Great talk, hope we will get more content like this on practical TS.
Hi, I am so grateful for this tutorial.
I will checkout these libraries. Very informative, thanks
Really informative talk!
Great talk
this is some seriously good stuff!
Excellent talk!
Great work👍👍
Amazing! So easy to understand.
Fantastic!
I have a question. Suppose I have time series data for a market from 2012 to 2022,
and I need to forecast the number of customers that visit the store.
But from 2020 to 2022, because of COVID-19, the number of customers dropped a lot.
In this case, if I use the last 30% of the data (2019 to 2022) for testing,
the model never sees any COVID-19-affected data during training, since all of it is in the test set.
Wouldn't that make the forecast MAPE very high? What should I do in this case? (Sorry for my poor English.)
Great presentation!
23:40 I don't really understand how it would cause data to leak into the train set. Can anybody please explain with an example?
Awesome lecture! I just have one question. At 32:38, Kishan mentions that the time indexes for different groups can be different, which is fine. But the original consolidated data (all groups included) has continuous timestamps, whereas when we consider different groups there may be gaps in the timestamps. Would you still consider them time series? Will the rest of the process work normally under these circumstances?
Great talk!
Thoughts on using the TFT model for multi-time-series forecasting?
Super helpful presentation, thank you, will definitely be checking out your course!
Here is the link, just in case ;) www.trainindata.com/p/feature-engineering-for-forecasting
Thank you very much. Great talk
excellent and very informative presentation. Will definitely checkout darts and sktime
Great talk thanks
How is y_train_all defined in the last example?
If you are imputing the training-set mean in place of a missing data point, does that mean the imputed point does not change your model estimation, since the fitted model passes through the mean of the variables anyway? I don't think it is information leakage in that sense; it is just saying "ignore this data point".
Great talk! How would you account for availability in your model? For example, let's say a SKU was out of stock for a portion of the training period. This could result in the sales lag feature being low for the out-of-stock SKU and high for substitute SKUs that were in stock.
You can create a dummy boolean variable feature.
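A minimal sketch of that idea in pandas (the sales and in-stock columns are hypothetical): flag stock-out periods so the model can learn that low sales there reflect availability, not demand, and optionally mask those periods out of the lag features.

```python
import pandas as pd

df = pd.DataFrame({
    "sales": [20, 22, 0, 0, 21, 23],
    "in_stock": [1, 1, 0, 0, 1, 1],
})

# Dummy feature marking out-of-stock periods.
df["out_of_stock"] = (df["in_stock"] == 0).astype(int)

# Optionally mask stock-out sales so they don't pollute the lag feature.
df["sales_lag1"] = df["sales"].mask(df["out_of_stock"] == 1).shift(1)
print(df)
```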
Amazing
Very informative and intriguing talk.
I've been using SARIMAX and things like fbprophet for time series forecasting.
I have a question about the value of the ML approach. Considering there is a host of things you need to account for while modeling a time-series problem as an ML problem, is it actually that significantly better than traditional algorithms? Is this production-grade stuff or is this in early experimental stages?
I must admit the ML approach sounds way more interesting than what I've been doing for the past few years.
*By ML models, I mean the tree-based ML models here.
We basically use XGBoost and LightGBM for forecasting, or even linear regression, so these models are fit for production. ML models have the advantage that they allow you to enrich the features you extract from the time series with features from external sources. Hence, they are in general more versatile than classical forecasting models like ARIMA, which make many assumptions about the data and do not incorporate features very well.
Hi, does anyone know how to implement the recursive forecasting that he did in Darts using sktime? I couldn't really find an intuitive explanation online.
Nice talk
Has anybody ever compared model results from sktime and Darts using the same dataset and the same parameters? For example, the ARIMA model from both packages.
I've tried it, and the two gave different MAPE results. I hope I have made a mistake in my code.
Can we do this with stock data using models such as linear regression?
Yes you can!
this dude could be a voice actor
Has anyone tried to apply this Darts model to real-world data? My MAPE score shows 26% ;-(
Have you ever used Darts? From Darts I got "ValueError: `lags` must be strictly positive. Given: -1."