STATEASE is the BEST. Everything made ease. From Kenya studying in Uganda.
Great video!
Hi,
The **Predicted R²** of 0.3685 is not as close to the **Adjusted R²** of 0.7063 as one might normally expect; i.e. the difference is more than 0.2. This may indicate a large block effect or a possible problem with your model and/or data. Things to consider are model reduction, response transformation, outliers, etc. All empirical models should be tested by doing confirmation runs.
The example above was from the software. The software stated that if the difference is greater than 0.2, there is a problem with the data or the model. However, as you said, greater than 0.2 is better. I also found one journal that stated that greater is better. Can you explain this matter? Thank you so much.
Hi! It looks like there's some confusion here. What the software is talking about is the *difference* between the Predicted and Adjusted R² values: you want them to agree with each other, so the difference of the values is less than 0.2. In other words, the Predicted R² in this example should be closer to the Adjusted R² value of 0.7063.
This is not the same as looking at a single R² value, where you do want the value to simply be as high as possible.
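To make the check concrete: in the quoted example the difference is 0.7063 − 0.3685 = 0.3378, which is above the 0.2 guideline, hence the software's warning. If you want to reproduce the two statistics yourself, here is a minimal Python sketch using the standard OLS formulas (this is not Design-Expert's own code, and the data at the bottom is made up purely to exercise the function):

```python
import numpy as np

def r2_summary(X, y):
    """R², adjusted R², and predicted R² for an ordinary least-squares fit.

    X : (n, p) matrix of predictors (no intercept column)
    y : (n,) response vector
    """
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])          # add the intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # fit the model
    resid = y - Xd @ beta

    ss_res = resid @ resid
    ss_tot = np.sum((y - y.mean()) ** 2)

    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

    # Predicted R² comes from PRESS: each leave-one-out residual is the
    # ordinary residual divided by (1 - leverage).
    leverage = np.diag(Xd @ np.linalg.inv(Xd.T @ Xd) @ Xd.T)
    press = np.sum((resid / (1 - leverage)) ** 2)
    pred_r2 = 1 - press / ss_tot

    return r2, adj_r2, pred_r2

# Made-up data just to demonstrate the check:
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(scale=2.0, size=20)

r2, adj_r2, pred_r2 = r2_summary(X, y)
print(f"Adjusted R² = {adj_r2:.4f}, Predicted R² = {pred_r2:.4f}")
print(f"Difference  = {adj_r2 - pred_r2:.4f}  (want this below 0.2)")
```

If the difference comes out large, that's the cue to revisit the model (reduction, transformation, outliers) rather than to celebrate a "big" number.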
Keep going... You really make us at Ease.
Thank you, we will!!
Great job Shari!
Thank you Dr. Lye!!
Sir/Madam, I have got 5 solutions with the same desirability. How do I choose the best one?
When there are multiple good solutions, you have a robust process! Make sure to do confirmation runs to verify the results, and then choose settings that work well from a business standpoint.
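If you'd like a purely numerical tie-breaker before the confirmation runs, one option is to rank the equally desirable solutions by something like operating cost or ease of running. A toy Python sketch (the factor names, numbers, and cost function below are invented for illustration and are not from Design-Expert):

```python
# Five candidate solutions with the same desirability (values are made up).
solutions = [
    {"temp": 150, "time": 30, "desirability": 0.874},
    {"temp": 160, "time": 25, "desirability": 0.874},
    {"temp": 145, "time": 35, "desirability": 0.874},
    {"temp": 155, "time": 28, "desirability": 0.874},
    {"temp": 148, "time": 32, "desirability": 0.874},
]

def run_cost(s):
    # Hypothetical cost model: hotter and longer runs cost more.
    return 0.02 * s["temp"] + 0.5 * s["time"]

best = min(solutions, key=run_cost)
print("Cheapest equally-desirable solution:", best)
```

Whatever tie-breaker you use, still confirm the chosen settings with actual runs.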