Salutes to you, sir! You're totally awesome, man!
Why didn't you explain the hyperparameter tuning code? I guess most of us didn't understand the params section.
The parameters are basically mathematical details, so they were already explained in that particular algorithm's video; check it out.
@@jaysoni7812 Hey, what is the point of calculating the correlation of the features? We didn't even do dimensionality reduction here.
I love this video; I have watched two videos of this kind on your channel.
One humble request: can you please make more videos like this, because they are really helpful and important for beginners like me.
Much love
Why didn't you look at the missingness correlations when filling in the missing values?
Hi, thanks for the video. One question: is it possible that replacing all the missing values with the mean is affecting the accuracy? Insulin has a lot of 0 values, and it is the main feature that can affect our final response in diabetes prediction. Is there a better way to impute the values so that they are more uniformly distributed?
Yes, good question; I have the same question. Help me with the solution.
I would suggest you try it with KNN once. I tried KNN on the same dataset and achieved an accuracy of 81+%.
Can you share the KNN code with me, please?
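Not the commenter's exact code, but a minimal sketch of the KNN approach mentioned above; the data is a synthetic stand-in for the 8-feature Pima table so the snippet is self-contained (with the real CSV, load it via pd.read_csv instead).

```python
# Minimal KNN sketch. make_classification stands in for the Pima data
# so the example runs anywhere.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=768, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# KNN is distance-based, so scaling the features first matters.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=11))
model.fit(X_train, y_train)
print(round(model.score(X_test, y_test), 3))
```

The n_neighbors=11 value here is an arbitrary choice; on the real dataset it would be worth tuning via cross-validation.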
What is the point of calculating the correlation of the features? We didn't even do dimensionality reduction here.
Bro, here we are getting accuracy as an output, but where is the actual diabetes prediction?
Do you think an accuracy of 77% is a good score? Or is there potential for better results?
I worked on the same dataset and got an accuracy of 81%.
I have a doubt: there is an independent variable named skin in this video, but the actual dataset does not contain that variable. Why is that? Am I looking at the wrong dataset?
Why are you using the mean imputation strategy? Any idea how to check for a pattern in the missing data?
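One simple way to check the missing-data pattern the comment asks about is to count the physiologically impossible zeros per column before imputing; a small sketch (the column names and values are assumptions, not the real Pima rows).

```python
# Count zeros per column as a crude missingness check. In the Pima data
# a 0 in Glucose, Insulin, or BMI is really a missing reading.
import pandas as pd

df = pd.DataFrame({
    'Glucose': [148, 85, 183, 0, 137],
    'Insulin': [0, 0, 0, 94, 168],
    'BMI':     [33.6, 26.6, 23.3, 28.1, 0.0],
})
zero_counts = (df == 0).sum()
print(zero_counts)
```

If one column (like Insulin) is mostly zeros, mean imputation will badly distort its distribution, which is exactly the worry raised above.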
Could you please teach Hadoop MapReduce k-means clustering (H-KC)?
Hi there, it seems that the value 0 in num_preg has also been replaced by the mean, but num_preg can legitimately be 0 in this data.
Could you please clarify?
Did you get the answer? Let me know if you did.
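A sketch of one fix for the num_preg issue raised above: mark zeros as missing only in the columns where a zero is impossible, impute those, and leave num_preg untouched (column names here are assumptions).

```python
# Replace 0 with NaN only where 0 is truly impossible, then mean-impute
# those columns; num_preg keeps its legitimate zeros.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'num_preg': [0, 2, 4],
    'glucose_conc': [0, 120, 150],
    'insulin': [0, 94, 168],
})
cols_with_fake_zeros = ['glucose_conc', 'insulin']
df[cols_with_fake_zeros] = df[cols_with_fake_zeros].replace(0, np.nan)
df[cols_with_fake_zeros] = df[cols_with_fake_zeros].fillna(
    df[cols_with_fake_zeros].mean())
print(df['num_preg'].tolist())  # zeros preserved: [0, 2, 4]
```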
Which algorithm did we use in this video? Decision tree? SVM? etc.
Random forest and XGBoost.
Nice one. Your pointers are very useful. Thank you
Requesting you to create a few more projects in the healthcare domain.
Brother, how do I write a research paper? Please help me. If I do one of the projects shown in your videos and then write an article, will it be accepted as a conference paper?
How do I impute missing values using multiple imputation by chained equations (MICE)? Because of the missing values in Insulin and some other attributes, the model's accuracy is affected.
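scikit-learn ships a MICE-style imputer, IterativeImputer, which models each feature with missing values as a function of the others; note it still sits behind an experimental import flag. A minimal sketch on a toy matrix (not the real Pima columns):

```python
# MICE-style imputation with sklearn's IterativeImputer.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy matrix; np.nan stands in for the zero "missing" readings.
X = np.array([[148., 72., np.nan],
              [ 85., 66., 94.],
              [183., np.nan, 168.],
              [ 89., 66., np.nan]])

imputer = IterativeImputer(max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X)
print(np.isnan(X_filled).any())  # no missing values remain
```

In practice you would first replace the impossible zeros with np.nan (as in the video's preprocessing) and then fit the imputer on the training split only.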
Thanks Krish, I am trying to do my dissertation in healthcare analytics; I wanted to know if you have done anything in quality-of-care mining?
Please mention the significance of the feature correlations found using the heatmap matrix. Also, how did you reduce the features from 10 to 8, and why?
The other two features were categorical, and their correlation can't be computed against numerical values.
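For reference, the heatmap in question visualizes a plain Pearson correlation matrix, which is only defined for numeric columns and only captures linear dependence; a tiny sketch with made-up values:

```python
# The heatmap is just a visualization of df.corr(); inspecting the
# target row shows which numeric features move linearly with the label.
import pandas as pd

df = pd.DataFrame({
    'glucose': [85, 148, 183, 89, 137],
    'insulin': [94, 155, 168, 83, 175],
    'diabetes': [0, 1, 1, 0, 1],
})
corr = df.corr()
print(corr['diabetes'].sort_values(ascending=False))
```

A near-zero Pearson value does not prove a feature is useless, only that any linear relationship is weak.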
Hi,
I am not able to import Imputer from sklearn.preprocessing.
I am getting the below error:
ImportError: cannot import name 'Imputer' from 'sklearn.preprocessing' (C:\Users\Vijaya\anaconda\lib\site-packages\sklearn\preprocessing\__init__.py)
Please do help.
Use this instead (Imputer was removed in newer scikit-learn versions):
import numpy as np
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
Hi sir, I am Likhitha, currently 11 years old.
I am getting an error that says "No module named 'xgboost'". Please help me solve this error.
Thank you,
Likhitha
Install the xgboost package:
pip install xgboost
I have my own dataset with approx. 15 parameters and want to write a paper, but I am not familiar with data science. Please suggest ....
Why did you choose the random forest algorithm for this? Actually, I am new to this, so I want to know the reason for using this algorithm compared to other algorithms.
Please share a detailed project on insurance fraud claim analysis ... end to end.
Sure... probably the next video I upload will be on that topic.
Your content is awesome... but please explain it more clearly; at some points it is difficult to understand... Thank you :)
Sir, can you give any reason why we used the XGBoost algorithm for improving accuracy, and why not another algorithm?
Thanks for the video. But your precision and recall are not that high, which is not good for this kind of analysis (we don't want to wrongly predict a patient as diabetic, and we also don't want to miss any patient with diabetes; the precision is somewhat acceptable, but not the low recall). Do you have an intuition of how you could improve those and the overall accuracy? Maybe some feature selection and feature engineering? I want to know more of your thoughts on it. Maybe a follow-up video on this?
I think more elaborate feature engineering would do it. I find the featuretools package, for automated feature engineering, to be the best. I have had amazing results with it so far at the deployment stage.
@@noecareme8242 Great. Will try it out
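One concrete way to trade precision for recall, as the thread above discusses, is to lower the decision threshold applied to the model's predicted probabilities; a sketch on synthetic stand-in data (not the video's code):

```python
# Lowering the threshold on predict_proba can only add positive
# predictions, so recall is non-decreasing as the threshold drops.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=768, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
default_recall = recall_score(y_te, (proba >= 0.5).astype(int))
lenient_recall = recall_score(y_te, (proba >= 0.3).astype(int))
print(default_recall <= lenient_recall)  # True
```

The cost is more false positives, so the threshold should be chosen against whichever error is more expensive clinically.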
What is the point of calculating the correlation of the features? We didn't even do dimensionality reduction here.
Sir, why did you consider only the pregnancies feature? You could have taken the others as well.
You have great communication skills.
Can you give any tips on how to develop them? Please 🙏🙏🙏
How will you handle the missing data if it is present?
Can you make a video on handling high cardinality in a feature?
4:17 I think if the label or class is in the form of a number, it is called regression. Is that so, sir?
No, it's not true.
@@shervintheprodigy6402 Why not, sir?
What is the purpose of this? Is this regression you are doing in Python?
How is the UCI diabetes dataset converted into an attribute-wise dataset?
After the imputation step, how do I see the mean values that replaced all the 0s in X_train? How do I view those?
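A fitted SimpleImputer exposes the per-column fill values in its statistics_ attribute; a minimal sketch:

```python
# After fitting, SimpleImputer stores the learned per-column means
# in `statistics_`.
import numpy as np
from sklearn.impute import SimpleImputer

X_train = np.array([[1., np.nan], [3., 4.], [5., 8.]])
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
X_filled = imputer.fit_transform(X_train)
print(imputer.statistics_)  # per-column means used for filling: [3. 6.]
```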
Can you say which algorithm you used, please?
I used logistic regression for this and got an accuracy of 0.76663, and it doesn't take long to fit.
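A minimal sketch of such a logistic-regression baseline, on synthetic stand-in data (so the score below is illustrative, not the commenter's 0.76663):

```python
# Logistic-regression baseline; scaling first helps the solver converge.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=768, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 3))
```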
Lmao
Can I use it for my project? (Probability paper)
How do I differentiate between diabetes types using the Pima Indians dataset?
@Krish Naik Which ML algorithms did you use in this project?
random forest classifier
XGBoost
Why does it give a different accuracy value when the dataset is the same?
Because when we test the accuracy using the test dataset, some of the test data wasn't included in the training dataset, so the model wasn't trained on or aware of that data; that's why it gives a different accuracy. To fix this, there is another method called K-Fold cross-validation; check that video.
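The K-Fold idea from the reply above, sketched with scikit-learn's cross_val_score on synthetic stand-in data: averaging accuracy over several folds gives a more stable estimate than a single train/test split.

```python
# 5-fold cross-validation: each sample appears in a test fold exactly
# once, and the mean/std summarize how stable the accuracy is.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=768, n_features=8, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(round(scores.mean(), 3), round(scores.std(), 3))
```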
Can I get this project ready-made?
Hi Krish. I enjoy your videos. I just wonder why you always use correlation to select your features. Yet, correlation only picks up the linear dependency. What if you use Wrapper methods or embedded methods? Thanks.
How do I find the diabetes prediction function?
There are null values in the dataset.
Can it be done with SVM?
If I want to take the input manually for prediction, in place of X_test, how can I do that?
Just pass an array of your preferred values in place of X_test.
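A sketch of that suggestion: build a 2-D array with one row per manually entered patient and pass it to predict (the model and feature values here are stand-ins, not the real Pima schema).

```python
# Predicting for one manually entered sample: predict expects a 2-D
# array of shape (n_samples, n_features), even for a single row.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

manual_sample = np.array([X[0]])  # shape (1, 8), like one patient's row
print(clf.predict(manual_sample))
```

The manual values must be preprocessed exactly like the training data (same imputation, same scaling, same column order), or the prediction is meaningless.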
Thanks Buddy
Thanks, nice one.
Hi sir, please make a video on LDA topic modelling.
Hi, how do we predict?
Prediction of onset of diabetes using machine learning algorithms
good job
Is there anyone who imported SimpleImputer instead of Imputer?
Can you solve one problem for me?
Provide the code