Hello James... Great work. Kindly make more videos on advanced topics and dashboards. Thank you so much.
Thank you! I plan to make a video on dashboards soon. Did you have any other advanced topics in mind? I'm always looking for new ideas.
@James Oliver Thanks for the reply, James. Some ideas: data blending on two reports, joint child attributes, how to deal with ragged hierarchies, split hierarchies in MicroStrategy, etc.
Thanks for the great suggestions! I can definitely cover some of these topics.
@James Oliver Thanks a lot, James.
Nice video. Two questions:
1. What is the difference between the training metric that you created and the predictive metric that MicroStrategy created itself?
2. How can you set the time limit for the prediction? I.e., how far into the future will the algorithm predict values?
Good question. The training metric is what is used to create the predictive metric. In other words, when you add the training metric to a report and run it, the data in that report builds the model that the predictive metric will ultimately use to make predictions on other reports: the training metric plus the report it sits on creates a model, and the predictive metric uses that model. So if you were training a metric to predict housing prices, you would run the training metric against a large historical dataset of actual housing prices. The predictive metric can then be used to "predict" future housing prices based on what it has seen during training.
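If it helps to see the same train/predict split outside of MicroStrategy, here is a minimal Python sketch using scikit-learn as a stand-in for the modeling MicroStrategy does internally; the housing features and prices are made up for illustration:

```python
# Sketch of the train/predict split. The feature columns and numbers
# below are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# "Training report": historical observations of actual housing prices.
# Feature columns: square footage, number of bedrooms.
X_train = np.array([[1400, 3], [1600, 3], [1700, 4], [1875, 4], [2350, 5]])
y_train = np.array([245000, 312000, 279000, 308000, 499000])

# Running the training metric = fitting a model on the report's data.
model = LinearRegression().fit(X_train, y_train)

# Using the predictive metric = applying that stored model to new rows.
X_new = np.array([[2000, 4]])
print(model.predict(X_new))  # predicted price for an unseen house
```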
@JamesOliver Thanks for your response... keep the useful videos coming :)
There is a conceptual error... You need more observations in the training report; otherwise you have one observation for each "beta"...
Adding the Day attribute will fetch more observations for the training algorithm and produce a better predictor.
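To make the point concrete, here is a minimal Python sketch (the slope, intercept, and noise level are made-up values): with only as many observations as coefficients, the fitted line simply reproduces the noise, while more observations, such as you would get by adding the Day attribute, pull the estimate toward the true relationship.

```python
# "One observation per beta" problem: with as many data points as
# coefficients, regression fits the noise exactly; with more
# observations (e.g., one per day instead of one per month), the
# estimated coefficients get closer to the true relationship.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
true_slope, true_intercept = 2.0, 5.0  # hypothetical ground truth

def fit_slope(n_obs):
    x = np.arange(n_obs, dtype=float).reshape(-1, 1)
    y = true_slope * x.ravel() + true_intercept + rng.normal(0, 3, n_obs)
    return LinearRegression().fit(x, y).coef_[0]

print(fit_slope(2))   # 2 points, 2 betas: line passes through both, noise and all
print(fit_slope(30))  # 30 points: slope estimate much closer to 2.0
```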
Please post a video on fact extension/degradation.
That is a great idea! I will definitely add that to my list of videos to do.