Very good explanation. I think it would be easier to follow if the symbols were written out alongside the narration.
You probably don't care at all, but does anybody know of a tool to get back into an Instagram account??
I foolishly lost the password. I'd appreciate any tips you can give me.
@Reuben Marcellus instablaster :)
@Miller Konnor thanks so much for your reply. I found the site through Google and I'm in the recovery process at the moment.
It looks like it's going to take a while, so I'll get back to you later when my account password is hopefully recovered.
@Miller Konnor It worked, and I now have access to my account again. I am so happy :D
Thanks so much, you saved my account :D
@Reuben Marcellus you are welcome xD
What video should follow this one?
This really helped me understand these concepts. Great job, and thank you!
This is a good video but I wish I could tell which video was next.
Very clear and straight to the point, I like it.
They took my boy's soul 😩
SOOOOOOOOOOOOOOOOOOOOOOOOO WELL EXPLAINED THANKS
What is the problem if the AIC & BIC are independent?
Thanks for your understandable explanation!
Is AIC suitable for non-nested models?
Why do we penalize the model for having more observations in BIC?
My guess is that when you have many parameters fitted to a large amount of data, the parameters can become over-fitted to that particular data set and less suitable for new additions to the data or for other data sets. So the penalty helps select a model that works well for the data in question and beyond. I could be wrong, though.
I don't think you want to penalize that.
When the dataset is large, more information is available and the model has a greater capacity to fit the data closely. Penalizing by dataset size helps prevent the model from becoming so complex that it is only good at predicting the training cases and not the unseen test cases, i.e., overfitting.
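To make the discussion above concrete, here is a minimal sketch of the two criteria, assuming a Gaussian likelihood and illustrative helper names of my own (AIC = 2k − 2·lnL̂, BIC = k·ln(n) − 2·lnL̂). The point of the thread is visible in the formulas: AIC charges a flat 2 per parameter, while BIC's per-parameter charge ln(n) grows with the sample size.

```python
# Sketch: comparing AIC's flat penalty with BIC's log(n) penalty.
# Assumes Gaussian errors; helper names are illustrative, not from the video.
import numpy as np

def gaussian_loglik(residuals):
    """Maximized Gaussian log-likelihood given least-squares residuals."""
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)  # MLE of the error variance
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

def aic(loglik, k):
    # flat penalty: 2 per parameter, regardless of n
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # penalty per parameter is log(n), so it grows with the dataset size
    return k * np.log(n) - 2 * loglik

rng = np.random.default_rng(0)
for n in (50, 5000):
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(size=n)          # true model: linear in x
    X = np.column_stack([np.ones(n), x])      # intercept + slope
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ll = gaussian_loglik(y - X @ beta)
    k = X.shape[1] + 1                        # coefficients + error variance
    print(f"n={n}: AIC={aic(ll, k):.1f}, BIC={bic(ll, k, n):.1f}, "
          f"BIC penalty per parameter={np.log(n):.2f}")
```

With n = 50 the two criteria penalize almost alike (ln 50 ≈ 3.9 vs 2), but at n = 5000 BIC charges ln 5000 ≈ 8.5 per parameter, which is why BIC tends to pick smaller models on large datasets.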
Why does the penalty for BIC scale with the amount of data? Doesn't testing on a large dataset reduce the likelihood of overfitting?
This is a great video. I'm not sure who would dislike this... perhaps novices expecting something much more basic.
Onya mate
neat...
Bozdogan's CAIC is better!
too fast
Is this guy trying to look cool?
I don't know, but he scares me xD