The latest pandas version no longer ignores string values in the .corr function. Just add numeric_only=True and it will work again.
thank you so much! i was looking how to resolve this issue
For people who are dumb like me, here is what it means :) sns.heatmap(train_data.corr(numeric_only=True), cmap='YlGnBu')
Thank you so much bro I was trying to solve this for 2 days continuously and nothing worked..🥹
thank you life saver!
import seaborn as sns
sns.heatmap(titanic_data.corr(numeric_only=True), cmap="YlGnBu")
plt.show()
I was pretty confused when I saw 100% accuracy lol, thanks for the explanation.
I knew it was cheating right away, especially since the data contains the specific names of the people on the Titanic.
Great tutorial video! It helped me understand how pipelines in ML work. I hope there will be more Kaggle competition walkthroughs like this from you soon! :)
Good video, but: 1) What was the purpose of the test set? You didn't use it to evaluate your model; you used cross-validation instead. 2) You shouldn't fit StandardScaler on the Kaggle test set, only transform it with the scaler you fitted on the training data, because if the features are distributed a bit differently, the scaling will differ and your model will see different numbers for exactly the same passenger. It would be nice if you paid attention to these details, because they are really important. But overall, the video is nice and useful.
Got the same comment. The test set shouldn't be fitted, only transformed.
Do you know any yt channel solving the titanic dataset for reference?
@@jaysoncastillo2593 Did u find anything?
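To illustrate the fit-on-train, transform-on-test point from this thread, here is a minimal sketch with made-up numbers standing in for the Titanic features (the variable names and values are illustrative, not from the video):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical Age/Fare columns standing in for the Titanic data
X_train = np.array([[22.0, 7.25], [38.0, 71.28], [26.0, 7.92], [35.0, 53.10]])
X_test = np.array([[28.0, 8.05], [54.0, 51.86]])

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std on train only
X_test_scaled = scaler.transform(X_test)        # reuse the train statistics

# Refitting on the test set uses the test set's own statistics instead:
leaky = StandardScaler().fit_transform(X_test)
```

The `leaky` version standardizes with the test set's own mean and standard deviation, so the same passenger would be encoded differently than during training.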
Nice "real life" example of the scikit pipeline. Helped me a lot, thanks.
This is a great video, I’ve been trying to find a good place that would show the code behind creating a basic ML pipeline, or show some beginner feature engineering and whatnot, but I haven’t found anything as straightforward as this. A lot of other people have a lot of fluff in their tutorials, but you just show it straight up, which I really appreciate. Do you have any recommendations for textbooks/articles for a beginner wanting to get into Machine Learning? I have a strong math/programming background, so that’s not an issue, I just need something that will comprehensively explain all the main components of making an ML project. Thanks in advance and keep up the good work!
Thanks for the video. Could you deep dive into the current solutions and make a video on how to iterate on the current solution to get even better prediction accuracy?
This is the best video I have ever watched on data science and ML to date.
This is actually such a good idea. A lot of python program / resume ideas are boring. Thanks!
Man your video was awesome. Easy to follow and replicate, plus you explain the key insights for those of us who have only a little knowledge of data analysis. Thanks a lot!
11:45 You can't use Pearson correlation coefficient for nominal/ordinal data.
12:49 you need to create dummy variables for each class.
Hey, I see he addresses the Pearson correlation coefficient issue later on, where he uses one-hot encoding to turn the data from ordinal to discrete. Is there a better way to visualize correlation even when you use this method? Or would doing the one-hot encoding first and then the correlation heatmap be best practice?
@@unfff doing one hot encoding and choosing the right correlation coefficient are two separate things. One hot encoding has nothing to do with correlation analysis. One hot encoding is just a transformation of a variable that can be used for multiple purposes.
To fix the fact that corr() doesn't work with words, you can do df.corr(numeric_only=True), where df is your data. That will give the correlation for your data, but you do lose the non-numeric columns.
@@Summer-of8zk You are talking about a technical solution. What do you mean by "it doesn't work"? Every statistical software will produce a correlation coefficient as long as your columns have digits in them. I'm talking about what's theoretically (in)correct.
Thank you very much for this tutorial. I was mentally down because I got 0.75 accuracy on my first try while many people had 1.0 accuracy, and I kept wondering why I couldn't. Now I understand. Thank you so much for this lesson.
I don't think there was a need to create the AgeImputer class, at least in the latest versions; using the SimpleImputer class directly is probably sufficient. But it's a good learning tip on how to create a custom class.
Why are you scaling the variables when using a tree-based model? Scaling is done to normalize data so that no particular feature is given priority. Scaling matters mostly in distance-based algorithms that rely on Euclidean distance. Random Forest is a tree-based model and hence does not require feature scaling.
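The scale-invariance of trees mentioned above is easy to check empirically. A minimal sketch (toy numbers, not the actual Titanic set): tree splits depend only on the ordering of feature values, which a monotone rescaling preserves, so predictions are unchanged:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Toy Age/Fare-style features with made-up labels
X = np.array([[22.0, 7.25], [38.0, 71.28], [26.0, 7.92],
              [35.0, 53.10], [54.0, 51.86], [2.0, 21.07]])
y = np.array([0, 1, 1, 1, 0, 0])

tree_raw = DecisionTreeClassifier(random_state=0).fit(X, y)

X_scaled = StandardScaler().fit_transform(X)
tree_scaled = DecisionTreeClassifier(random_state=0).fit(X_scaled, y)

# Same split structure, same predictions on correspondingly scaled inputs
pred_raw = tree_raw.predict(X)
pred_scaled = tree_scaled.predict(X_scaled)
```

Scaling doesn't hurt a tree model either, which is presumably why the video keeps the scaler in the pipeline; it just isn't required.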
Thank you for the great tutorial! Do you have more Kaggle competition walkthroughs?
Thank you so much for providing this video; it helped me understand a lot.
I followed the code as shown in the video and got an error when we fit_transform the strat_test_set: the 'Embarked' column was missing. I think it is because we drop it in the FeatureDropper, so when the pipeline processes it all over again we get this error. Can you help me fix it?
I got the same error too
Me too
Maybe that's because you ran that part of the code multiple times? I restarted and ran all the code, and it works fine.
@@binglinjian2324please tell how to fix this 😢
I got the same error too
This is strange but, if you add the name length as a column it helps. The name length has 0.332350 correlation with the Survived column :)
Correlation is not causation. Very good example!
Great Video, thank you!
Thank you... I have one question: why did you pick these models? On which KPI do you base your model choice for different kinds of problems? That would be very interesting for me.
I'm getting the error "sparse matrix length is ambiguous; use getnnz() or shape[0]" on the lines below. How do I solve this?
column_names = ["C", "S", "Q", "N"]
for i in range(len(matrix.T)):
    X[column_names[i]] = matrix.T[i]
Me too, how do I solve it?
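That error comes from calling len() on the SciPy sparse matrix that OneHotEncoder returns. One common fix is to densify it with .toarray() before iterating, as in this sketch with toy data (note that the real column order comes from encoder.categories_, which is alphabetical, not necessarily the ["C", "S", "Q", "N"] order used in the video):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in for the Titanic "Embarked" column
X = pd.DataFrame({"Embarked": ["C", "S", "Q", "S"]})

encoder = OneHotEncoder()
# .toarray() converts the sparse result to a dense ndarray,
# so len() and row iteration work as expected
matrix = encoder.fit_transform(X[["Embarked"]]).toarray()

column_names = ["C", "Q", "S"]  # matches encoder.categories_ (alphabetical)
for i in range(len(matrix.T)):
    X[column_names[i]] = matrix.T[i]
```

Always check encoder.categories_ before hardcoding column names, or the dummy columns end up mislabeled.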
sns.heatmap(titanic_data.corr(), cmap="YlGnBu")
plt.show()
This gives error: could not convert string to float: 'Braund, Mr. Owen Harris'
shouldn't the titanic_data.corr() drop the string columns automatically?
How did you solve this error?
Do sns.heatmap(titanic_data.corr(numeric_only=True), cmap="YlGnBu") instead of sns.heatmap(titanic_data.corr(), cmap="YlGnBu") at 11:50.
I assume this behavior was the default when the video was made and was changed later. The correlation function can't compute a correlation for anything non-quantitative, so you have to tell it to look only at numerical features.
@@unfff tnx bro... it helped
yes this same error exist to me also
@@unffftysm 🥰
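Putting the fix from this thread into a self-contained sketch (toy rows standing in for the Titanic CSV; the resulting matrix can be passed to sns.heatmap as shown above):

```python
import pandas as pd

# Toy stand-in for the Titanic data: one string column, two numeric ones
titanic_data = pd.DataFrame({
    "Name": ["Braund, Mr. Owen Harris", "Heikkinen, Miss. Laina"],
    "Survived": [0, 1],
    "Fare": [7.25, 7.92],
})

# numeric_only=True makes corr() skip string columns instead of raising
corr = titanic_data.corr(numeric_only=True)
```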
I just used logistic regression and got 0.7655 taking only gender & Pclass. Thanks for your clarification about 100% accuracy though.
I am new to the field of data science in terms of experience, though I have completed a paid skills course from IBM. On my first attempt at this project, which is my first project, I got an accuracy of 78%. Is that good enough to move on to the next project, or should I try to refine my model for better accuracy? Please advise, someone with experience.
had a problem here 42:05
I solved it by selecting only the numeric columns:
X_test_numeric = X_test.select_dtypes(include=[np.number])
Bro, how did you solve the problem at 32:00?
🙄
Can you share the code you used to solve it?
@@SaurabhSah-x7w Yeah sure, I don't remember right now, but I will check my code tomorrow and write you back.
Can you make a tutorial on an AI that plays a game using the NEAT module in python and pygame???
Thanks, it is a great tutorial
Interesting that at 15:10 you said you don't want to look too much at your training set so you don't get biased. Everyone else I hear says to examine it as much as possible... is there something I'm misinterpreting from you or them?
He said the testing dataset, not the training dataset.
I'm stuck in the following code:
X_final_test = final_data
X_final_test = X_final_test.fillna(method="ffill")
scaler = StandardScaler()
X_data_final_test = scaler.fit_transform(X_final_test)
The error message: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
What should I do, guys?
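The warning means the method keyword is going away; the replacement is the dedicated ffill()/bfill() methods. A minimal sketch with toy data (final_data here is a made-up stand-in for your actual frame):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the Kaggle test data with some missing values
final_data = pd.DataFrame({
    "Age": [22.0, np.nan, 30.0],
    "Fare": [7.25, 8.05, np.nan],
})

# Replacement for the deprecated final_data.fillna(method="ffill"):
X_final_test = final_data.ffill()
```

The behavior is identical: each NaN is filled with the last non-missing value above it in the same column.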
I am confused as to when I should use fit_transform and when I should use transform only. Previously, I understood that when you use the former, you are calibrating the estimator, so to speak, to a particular set of data, so if you want to use that estimator later and have it behave in exactly the same way, you should not refit it but only call its transform method. In this video, however, you used fit_transform every time and still got it to perform the same on every data set. Could you tell me a little about how that works?
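One way to see the difference the question above asks about: fit (and hence fit_transform) learns parameters from the data it is given, while transform reuses the parameters already learned. A minimal sketch with SimpleImputer and toy numbers (calling fit_transform on every set, as in the video, may happen to give similar results when the sets are distributed similarly, but it silently recomputes the parameters each time):

```python
import numpy as np
from sklearn.impute import SimpleImputer

train = np.array([[20.0], [30.0], [np.nan], [40.0]])
test = np.array([[np.nan], [25.0]])

imp = SimpleImputer(strategy="mean")
imp.fit_transform(train)      # fit: learns the train mean (30.0), then fills
filled = imp.transform(test)  # transform only: reuses the learned train mean

# filled[0] is the *train* mean (30.0), not the test mean (25.0)
```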
Correlation doesn't work for string values; how did you do it? 🤔
Same problem
@@Dan-mm9yd numeric_only=True
Is there a difference between one-hot encoding in pandas and sklearn? The process is so much easier with pandas; is there a particular reason why he used sklearn?
At 32:00, how is he calling strat_train_set in the pipeline.fit_transform function when the variable doesn't exist yet?
Did u find the answer?😬
@@90cijdixke Did you find yet ?
Am I the only one stuck at 32:31? I keep getting this error: AttributeError: 'FeatureEncoder' object has no attribute 'transform'
could you solve this?
@@aidaosmonova4798 hi could you solve it?
Please tell how to fix this
The error indicates an issue with the FeatureEncoder class, specifically the transform method. The problem arises because the variable column_names is used before being defined in the second part of the transform method.
Try this code instead:

from sklearn.preprocessing import OneHotEncoder

class FeatureEncoder(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # Encode "Embarked"
        encoder = OneHotEncoder(handle_unknown="ignore")
        embarked_matrix = encoder.fit_transform(X[["Embarked"]]).toarray()
        embarked_column_names = ["C", "S", "Q", "N"]
        for i in range(len(embarked_matrix.T)):
            X[embarked_column_names[i]] = embarked_matrix.T[i]

        # Encode "Sex"
        sex_matrix = encoder.fit_transform(X[["Sex"]]).toarray()
        sex_column_names = ["Female", "Male"]
        for i in range(len(sex_matrix.T)):
            X[sex_column_names[i]] = sex_matrix.T[i]
        return X
Sir, my updated sklearn version doesn't have fit_transform. Please guide me, what should I do?
Is there a video where you explain in depth how a Python class, __init__, and related methods work?
Yes, search for OOP in Python.
Why did you fit your pipeline on the test.csv data
Hello, thanks for this video, but strat_train_set = pipeline.fit(strat_train_set) gives an AttributeError: DataFrame object has no attribute "toarray".
How to fix this please tell
@@jeeaspirant7890 I can't fix it
Thank you for your teaching video, it is very good for noobs.
Awesome bro
Does anyone know how to compute the MSE for this dataset?
A little bit fast (especially the typing xD), but a good tutorial; I got 79.42%, thanks!
I'm curious: how do I get the accuracy percentage inside the notebook, comparing the predictions with the dataset we have, instead of just uploading to Kaggle?
Can we download your Jupyter notebook from somewhere?
I just submitted mine today and I got a score of 0.78229 but then I saw all those 1s and I was like "just how did they do that"😂
Thanks, very interesting video. New subscriber.
The Embarked column in the test set has no N value and I am not able to use your pipeline code because of it. Is there a way to overcome this?
Ok, got it. I didn't write errors="ignore" in the FeatureDropper section.
I got the error ValueError: Input contains NaN after this line: strat_train_set = pipeline.fit_transform(strat_train_set). I was following your tutorial.
I got the same error, did you perhaps get the answer?
@@yashp5341 I solved it, but I don't know if you got the same error. It kept pointing at this: X[column_names[i]] = matrix.T(i), and it should look like this: X[column_names[i]] = matrix.T[i]. I had to change the parentheses to square brackets. I hope it helps.
I'm getting a strat data error, and after that everything errors out. Can anyone explain why?
Nice video.
Can't import BaseEstimator, can anyone help?
nice
I see, they're probably cheating. I lost confidence when I saw some with 100% while I only got 0.76, which I think is not bad.
Give the notebook
decent vid
lol
Even I laughed at the title.
Lol
Very interesting. But please translate your video into Russian.
No offence, but the generally accepted language of computer science is English. It would be hard to translate everything, and I am saying this as a non-native speaker.
@@quasii7 Ah, okay then.
I am also Russian, but all computer science literature etc. is mostly in English, so it's better to get used to it.