Take my courses at mlnow.ai/!
Thank you, Greg, for this amazing tutorial. It helped me learn how to predict house prices in Melbourne.
I have a question: can neural networks be helpful for reducing noise/interference or overfitting in the final dataset? If so, what conclusion would you draw in this case?
Please let us know how we can encourage you to do more videos like this. Thanks for the wonderful explanations; I really love your style of teaching.
Thanks so much Alexander - which is my middle name by the way!
The video was very informative, but I have what might be a silly question; I'm new to ML. How can we interpret the prediction values? Throughout the video you mention "oh, this is overfitting" or "there is no overfitting here." I would like to know how we can spot overfitting by looking at the prediction values, unless I am missing something?
You're the best! Brilliant and clear explanation!!!! Thank youuu!!!
Very welcome 😁😁
Amazing video! Do you currently have a video that covers different AI models (linear regression, neural networks, etc.), and when and why each should be used for a particular dataset?
At 16:00, I found an easy way to do this:
median_house_value = 'median_house_value'
# Move the target column to the end of the DataFrame
all_columns = list(data.columns)
all_columns.remove(median_house_value)
new_column_list = all_columns + [median_house_value]
data_new = data[new_column_list]
data_new
Also, at 19:50 you can use another easy, automatic way to do the train/test split:
from sklearn.model_selection import train_test_split
X = data_new.drop(['median_house_value'], axis=1)
y = data_new['median_house_value']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
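One note on the split above: passing `random_state` makes it reproducible. A minimal self-contained sketch (the tiny `data_new` DataFrame here is made-up stand-in data, since the video's housing DataFrame isn't shown in this thread):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in for the housing DataFrame used in the video
data_new = pd.DataFrame({
    "median_income": [1.5, 2.3, 3.1, 4.8, 5.0, 2.2, 3.9, 4.1, 1.9, 2.7],
    "median_house_value": [120, 180, 240, 390, 410, 175, 300, 330, 140, 210],
})

X = data_new.drop(["median_house_value"], axis=1)
y = data_new["median_house_value"]

# test_size=0.2 holds out 20% of the rows; random_state fixes the shuffle
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(len(X_train), len(X_test))  # 8 2
```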
Thanks for the new video. Would using a PIPELINE be cleaner and recommended by you?
You're welcome! Pipelines are great. I've never personally used them, but I know how they work and they're pretty awesome.
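For anyone curious what a pipeline looks like in practice, here is a minimal sketch (not from the video; the tiny arrays are made-up toy data) chaining a scaler and a linear model so preprocessing and fitting happen in one object:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

# Toy data: y is exactly 2 * x
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])

pipe = Pipeline([
    ("scale", StandardScaler()),   # fitted on training data only
    ("model", LinearRegression()),
])
pipe.fit(X, y)

# predict() automatically applies the fitted scaler before the model
print(pipe.predict(np.array([[5.0]])))  # ≈ [10.]
```

The nice part is that calling `pipe.predict` on new data re-applies the exact same fitted transforms, so there's no risk of forgetting a preprocessing step at inference time.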
Great video!
I have a question though: how would you approach data preprocessing when making predictions with a trained model on new data (say, user input or streaming)?
As long as you have some predefined preprocessing function that doesn't depend on anything else, you can just apply it to every incoming input.
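One common way to sketch this (my own illustration, not from the video; the numbers are made-up): fit any data-dependent transform, like a `StandardScaler`, on the training data once, then reuse that same fitted object for every new input:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Fit the scaler's statistics (mean, std) on training data only
train = np.array([[100.0], [200.0], [300.0]])
scaler = StandardScaler().fit(train)

def preprocess(raw):
    # Apply the *same* fitted transform to any new input
    # (user input, a streaming record, etc.)
    return scaler.transform(np.asarray(raw, dtype=float).reshape(1, -1))

print(preprocess([200.0]))  # [[0.]] because 200 is the training mean
```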
Hi everyone! My MSE and RMSE are very large; are values like that reasonable? All the house-price prediction tasks I've done have had similarly large MSE and RMSE.
Great explanation!! But I have a question: how can I know the correct split for train, test, and validation? Is it okay if the test and validation sets share some data?
Roughly 70, 15, 15, at random
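A 70/15/15 split can be done with two chained `train_test_split` calls (a minimal sketch with made-up data, not code from the video):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data: 100 rows
X = np.arange(100).reshape(-1, 1)
y = np.arange(100)

# First carve off 30% for validation + test...
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, random_state=0
)
# ...then split that 30% in half: 15% validation, 15% test
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, random_state=0
)
print(len(X_train), len(X_val), len(X_test))  # 70 15 15
```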
@39:00 the validation MSE is worse than the training MSE. The model did not generalize, but does this mean it's an overfitting issue?
Yes, we overfit to the training set :)
That is good. It will be really helpful for me.
I'm very glad!
Why didn't you show us how to create the API and upload it?
Great project!!
Thank you ☺️
Could you make a video on a roadmap for machine learning & deep learning with Coursera in 2023, please? ❤️
I'd say my roadmap is still valid
Can we have a project tutorial on a real-time facial emotion detector in Python? Please tell us the different approaches/methods we could take to complete this project.
Okay this will be added to the to-do list!
At 39:14, why did you predict on the train data instead of the test data?
Because there we want to calculate the MSE on the training data, not the testing data; it tells us how well the model learned from the training data.
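Comparing training MSE against held-out MSE is also how you spot overfitting. A minimal sketch (my own toy example with made-up data, not code from the video): an unconstrained decision tree memorizes the training noise, so its training MSE is near zero while its test MSE stays large:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Made-up noisy linear data: y = 3x + noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3 * X.ravel() + rng.normal(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# An unconstrained tree can fit every training point exactly
model = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)
train_mse = mean_squared_error(y_train, model.predict(X_train))
test_mse = mean_squared_error(y_test, model.predict(X_test))

# Training MSE near 0 but test MSE much larger => overfitting
print(train_mse, test_mse)
```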
could you provide the notebook link?
Thank you very much for reminding me - it's there now.
I am doing this in Google Colab and it keeps saying Import "tensorflow.keras..." could not be resolved
Are you importing exactly the same way?
Could you please provide the dataset?
U are best
No u
@@GregHogg No U. Today I watched the full video 😃
@@__________________________6910 Oh great to hear!
@@GregHogg 😅