Exploratory Data Analysis (EDA) and Feature Engineering are two essential steps in data science projects that help in understanding the data, extracting valuable insights, and preparing the data for model building and analysis.
Exploratory Data Analysis (EDA):
EDA is the initial and crucial phase of any data science project. It involves exploring and summarizing the main characteristics of the dataset to gain insights into its structure, patterns, and relationships between variables. The main objectives of EDA are as follows:
Data Cleaning: Identifying and handling missing or erroneous data points, dealing with outliers, and removing duplicates.
Descriptive Statistics: Calculating basic statistical measures such as mean, median, standard deviation, and percentiles to understand the central tendencies and dispersion of the data.
Data Visualization: Creating visual representations like histograms, scatter plots, box plots, and heatmaps to visualize the distribution and relationships between variables.
Correlation Analysis: Assessing the correlation between different features to understand their interdependencies and potential multicollinearity.
Hypothesis Testing: Conducting statistical tests to validate assumptions and make data-driven decisions.
EDA helps data scientists identify patterns, trends, and potential issues within the dataset. It provides a foundation for further analysis and model building; a minimal code sketch of these steps follows.
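Below is a minimal sketch of these EDA steps with pandas, seaborn, and matplotlib, assuming a generic CSV file and a numeric `age` column (both are placeholders, not from any specific dataset discussed here):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")  # placeholder file name

# Descriptive statistics: mean, std, percentiles for numeric columns
print(df.describe())

# Missing values per column
print(df.isnull().sum())

# Distribution of a (hypothetical) numeric feature and its outliers
sns.histplot(df["age"], kde=True)
plt.show()
sns.boxplot(x=df["age"])
plt.show()

# IQR rule for flagging potential outliers
q1, q3 = df["age"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["age"] < q1 - 1.5 * iqr) | (df["age"] > q3 + 1.5 * iqr)]
print(len(outliers), "potential outliers in 'age'")

# Correlation heatmap across numeric features
sns.heatmap(df.select_dtypes("number").corr(), annot=True, cmap="coolwarm")
plt.show()
```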
Feature Engineering:
Feature engineering involves transforming the raw data into meaningful features that can be used as inputs for machine learning algorithms. The quality and relevance of features play a significant role in the performance of a predictive model. The key steps in feature engineering are as follows:
Feature Selection: Choosing the most relevant features that have a significant impact on the target variable while disregarding irrelevant or redundant ones. This step helps in reducing dimensionality and enhancing model efficiency.
Feature Transformation: Applying mathematical or statistical transformations to the features to make the data suitable for modeling. Common transformations include scaling, normalization, and log transformations.
Handling Categorical Variables: Converting categorical variables into numerical representations using techniques like one-hot encoding or label encoding to make them usable by machine learning algorithms.
Creating Interaction Features: Introducing new features based on interactions between existing features can help capture non-linear relationships.
Handling Missing Data: Dealing with missing data by imputing or removing missing values, depending on the nature of the dataset.
Feature Extraction: Generating new features from the existing data using domain knowledge or advanced techniques like principal component analysis (PCA) or t-distributed stochastic neighbor embedding (t-SNE).
Effective feature engineering can significantly improve the performance of machine learning models by providing them with more relevant and informative inputs, leading to more accurate predictions and better insights.
In summary, Exploratory Data Analysis (EDA) helps in understanding the data, identifying patterns, and making data-driven decisions. Feature engineering transforms the data into useful features, enabling machine learning models to learn from the data and make predictions effectively. Together, these two steps are fundamental for successful data science projects; a small preprocessing-pipeline sketch illustrating them follows.
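As one concrete illustration of the feature engineering steps above (imputation, scaling, and categorical encoding), here is a minimal scikit-learn pipeline sketch; the file name and column names are assumed placeholders, not a reference implementation:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("data.csv")            # placeholder dataset
numeric_cols = ["age", "income"]        # hypothetical numeric features
categorical_cols = ["city", "gender"]   # hypothetical categorical features

# Numeric features: impute missing values with the median, then standardize
numeric_pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
])

# Categorical features: impute with the most frequent value, then one-hot encode
categorical_pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),
    ("encode", OneHotEncoder(handle_unknown="ignore")),
])

preprocess = ColumnTransformer([
    ("num", numeric_pipeline, numeric_cols),
    ("cat", categorical_pipeline, categorical_cols),
])

X = preprocess.fit_transform(df[numeric_cols + categorical_cols])
print(X.shape)
```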
Thank you so much
Thank you for providing this meaningful description.
💪🤣 His facial expression gets serious when he says he goes with box plots to find the outliers. Gotta love the passion, bro.
Krish sir, you know your channel is not only a YouTube channel... it is everything for us!
Having a mentor and teacher like you is a blessing.
I have been watching your videos non-stop for weeks now; by God, you are my favorite tutor... God bless.
This guy deserves a million subs 🌸❤️
I am from the future and he has a million subs.
Hi Krishna sir,
I got a new job in the data science domain at a product-based company in Chennai. Your videos helped me a lot; before this, I was working in a different domain.
Best Regards,
Balaji
Congratulations
The induction session from the MLDL course is awesome... that's 🔥🔥🔥
a lot of love and appreciation from Pakistan for your great effort.
1. Feature Engineering (Takes 30% of Project Time)
a) EDA
i) Analyze how many numerical features are present using histograms and PDFs (seaborn, matplotlib).
ii) Analyze how many categorical features are present and how many categories each feature has.
iii) Missing values (visualize all of these with graphs)
iv) Outliers - Boxplot
v) Cleaning
b) Handling the Missing Values
i) Mean/Median/Mode
c) Handling Imbalanced dataset
d) Treating the Outliers
e) Scaling down the data - Standardization, Normalization
f) Converting the categorical features into numerical features
2. Feature Selection
a) Correlation
b) KNeighbors
c) ChiSquare
d) Genetic Algorithm
e) Feature Importance - Extra Trees Classifier
3. Model Creation
4. Hyperparameter Tuning
5. Model Deployment
6. Incremental Learning
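A compact sketch of steps 2 through 4 from the outline above, using scikit-learn on a built-in toy dataset; the dataset, the parameter grid, and the use of class_weight for imbalance are illustrative assumptions, not the exact choices from the video:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# 2e) Feature importance with an Extra Trees classifier
et = ExtraTreesClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print("First five importances:", et.feature_importances_[:5].round(3))

# 2c) Chi-square based selection (requires non-negative features)
selector = SelectKBest(score_func=chi2, k=10).fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# 3) Model creation and 4) hyperparameter tuning via grid search;
# class_weight="balanced" is one simple way to handle an imbalanced target
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}
grid = GridSearchCV(
    RandomForestClassifier(class_weight="balanced", random_state=42),
    param_grid, cv=5, scoring="f1",
)
grid.fit(X_train_sel, y_train)
print("Best params:", grid.best_params_)
print("Test F1:", grid.score(X_test_sel, y_test))
```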
Thank you so much!
Thank you
Thanks. You saved me 5 minutes.
Thanks a lot, Ma'am 🙏🙏
thanks
Really appreciate it.
Top priority for Aspiring Data Scientists like me
This is clear info about FE and EDA. 🙏🙏
Thank you so much for helping us this way ....🎉🎉🎉🎉 Thank you so much sir
You are a very knowledgeable and helpful person 🎉🎉🎉🎉🎉
Sir, one more video on EDA with all the steps and an implementation on a dataset, please.
What I actually need, you know very well, sir, but how?? You read my mind; you are omniscient, a great sage. In fact, I would say you are not just a man, you are a truly great soul 🤩😍😍❤❤❤
Can we have a video on a real-time project with all the necessary steps, Krish??
Thank you..much needed 🙂
Numerical features, categorical features, missing values (visualize), outliers (box plot), cleaning.
Step 2: handle missing values with the mean, remove outliers using the box plot/IQR, handle the imbalanced dataset, treat outliers, scale the data (standardization and normalization), and convert categorical features to numerical features.
Thanks, Krish, for the video. I am about to start my first ever project as an intern, and this helped me in a very deep way. Thank you 🙂. If you could give me any suggestions, that would be very helpful.
Please let us know your experience after 3 months of the internship.
that expression and sound at 4:30..🤣🤣
Great Work sir jii ! 👌👌👌👌
Is data cleaning part of feature engineering?
One doubt: can we scale categorical labels even before encoding?? Is that possible?
Thank you Krish!!!!!!!
Are all of these things you show in the video available in your feature playlist, with complete guidance?
yes sir
@@krishnaik06 I need your help
Sir, please teach us ML and DL also... your way of teaching is very good.
Very helpful channel😁
Sir, one video on the steps for model training, please.
Thank you…great video
Thanks a lot, you did awesome 🥰❤️
Thank you for this video sir
Very important step
Great list of videos for EDA. In case we have more categorical variables and fewer numerical variables, should we work with the CHAID algorithm post-EDA? Please suggest. Thanks.
Which pen tablet are you using?
Sir, we need a video on feature extraction with an example.
Do EDA and FE serve the same purpose?
great video sir
please make a project on sign language recognition
@krish Naik, I have been following your channel since the early days. I have a question: how do we use the information extracted from EDA? E.g., by plotting a CDF graph, I can say that 70% of people are below the age of 50. But the question is, where is this information used in the project?
Great sir
I have a grade column which contains a mix of percentage and CGPA values... how do I convert all the data into percentages? Sample code would be helpful.
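One hedged sketch of how this could be done with pandas, assuming values above 10 are already percentages and values on a 10-point scale are CGPA converted with the common 9.5 multiplier (that factor is an assumption; use whatever conversion your institution specifies):

```python
import pandas as pd

# Hypothetical 'grade' column mixing percentages ("78%") and CGPA values ("8.2")
df = pd.DataFrame({"grade": ["78%", "8.2", "91%", "6.75"]})

def to_percentage(value):
    number = float(str(value).rstrip("%"))
    # Assumption: anything on a 10-point scale is CGPA; 9.5 is a common
    # CGPA-to-percentage factor, not a universal rule.
    return number * 9.5 if number <= 10 else number

df["grade_pct"] = df["grade"].apply(to_percentage)
print(df)
```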
Thank you, sir.
Guys, I have a doubt; can anyone help?
For scaling data: we have numerical columns, and the categorical columns are encoded into numerical ones. So does scaling need to be done only on the numerical columns, or on the encoded columns as well?
Sir, are data structures and algorithms used in data science?
ruclips.net/video/ND3HXC46zO4/видео.html
This video of Krish will answer your question.
How do we handle missing values in NLP data like reviews and feedback, rather than categorical features?
Just drop them.
"udush channel" - 0:02😂
Sir, but before doing EDA we could also split the data first, so that the test data is completely isolated and reveals nothing during training. Then we can perform EDA on the training data and transform the test data afterwards. Is this good practice, or do we perform EDA on the complete data?
In theory you can create the training/test split at any point of the "pipeline". Generally you are sampling data points based on some distribution, or at random, and classifying those records as training/testing. That being said, you want the same transformations applied to the training and testing sets so you can apply one inverse function to revert those transformations. For example, if you are using a MinMax scaler and you apply it after splitting, then the inverse needed to undo the normalization will be different for each set, since the min/max of each dataset is different. So ideally you apply feature engineering on the dataset as a whole before splitting.
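To make that last point concrete, here is a tiny sketch with made-up numbers showing that a MinMaxScaler fitted only on the training split learns a different min/max, and therefore a different inverse transform, than one fitted on the full dataset:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

full = np.array([[1.0], [5.0], [10.0], [100.0]])  # toy feature values
train = full[:3]                                  # pretend this is the training split

scaler_full = MinMaxScaler().fit(full)
scaler_train = MinMaxScaler().fit(train)

print(scaler_full.data_min_, scaler_full.data_max_)    # [1.] [100.]
print(scaler_train.data_min_, scaler_train.data_max_)  # [1.] [10.]

# The same scaled value maps back to different originals under the two fits
scaled = np.array([[0.5]])
print(scaler_full.inverse_transform(scaled))   # [[50.5]]
print(scaler_train.inverse_transform(scaled))  # [[5.5]]
```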
great video
Sir, can you show this with an example, step by step?
Can you make a detailed video on hyperparameter tuning?
He did, I think.
In some cases, data collection comes first.
The Telegram link is broken.
You are awesome.
Sir, I am doing an integrated MSc in data science (BSc + MSc) in Goa. In the 5th semester they will teach us machine learning, so should I do the MLDL course from iNeuron?? And can you suggest a course that will be a plus point for my career?
Go for that MLDL course from iNeuron... you will gain vast knowledge.
@@mukeshkund4465 And I have one more question: should I take MLDL from iNeuron, or should I do it from the playlist which sir uploaded?
If you are planning for a job in AI or ML, then go for the AppliedAI course.
If you are learning for your own knowledge, you can consider Krish sir's playlist or courses from iNeuron.
But should missing values be handled before or after splitting the dataset into train and test data?
Explaining the theory is easier than doing the practical along with the theory.
Sir, for numerical data columns which have a large number of zeros, should we replace them with the mean/median? Should we consider those zeros as missing values?
My dataset is a time series with spends vs. sales columns at a weekly level.
I saw that the spends column for one channel has too many zeros; what should I do in this case?
❤❤
Oh, so it's "YouTube's" channel. I was wondering why he was saying "youtush" channel 😅😅