Correction:
3:23 The array should only have wt1 through wt5, ko1 through ko5.
Support StatQuest by buying my books The StatQuest Illustrated Guide to Machine Learning, The StatQuest Illustrated Guide to Neural Networks and AI, or a Study Guide or Merch!!! statquest.org/statquest-store/
Thank you, I was referring to 3:23. Your videos are great.
I am a medical doctor from Turkey and currently, I am planning a career change to data science and I have been watching your videos to get prepared for a data scientist position. Could you create a few videos regarding data science interviews if it is relevant for your channel content? Best Regards, Göktuğ Aşcı, MD.
@@GoktugAsc123 I'll keep that in mind.
@@keerthik3791 Unfortunately the random forest implementations for Python are really bad and they don't have all of the features. If you're going to use a random forest, I would highly recommend that you do it in R instead.
@@statquest Thank you for the suggestion. I am good at Python and MATLAB. Can I do random forests in MATLAB? Or is learning R necessary here?
@@keerthik2168 I have no idea. I've never tried to do random forests in Matlab.
Dude you deserve a humanitarian award.
Thanks! :)
he is a good human in my eyes
@@joshuamcguire4832 bam!
@@rezab314 super bammm!!!
Not only the best PCA demonstration but also THE BEST introduction to Python. Hats off to you man!!
Thank you! :)
Whenever I search for some machine learning based explanation, I add 'by statquest' in it ^_^. Keep up the great work :')
Thank you very much!
@@statquest It's True I do the same thing ..thank you for your hard work
"Note: We use samples as columns in this example because... but there is no requirement to do so."
"Alternatively, we could have used..."
"One last note about scaling with sklearn vs scale() in R"
This is some of the gold that sets StatQuest apart. Thank you! ❤
Thank you! :)
I have been dabbling in data science for a while now, and only now learned that pandas stands for "panel data" xd
This channel never ceases to amaze
:)
Finally! You explain in the language I understand much better than English haha Thanks !!!
:)
but you are watching a tutorial \(-_-)/
YOU ARE SAVING MY DEGREE I LOVE YOU SO MUCH I CANT EVEN BELIEVE THIS IS THE SAME MATERIAL IM LEARNING IN MY MACHINE LEARNING CLASS RIGHT NOW.
Happy to help!
The fact that you said bam when the plot showed what we wanted really shows that even if you are a pro python programmer, you still feel happy when you code correct, relatableeee
bam! :)
One of the best videos ever made on this topic. This channel has helped me a lot in understanding machine learning in greater detail. Keep up the good work !!
Thank you!
Python. Now you're speaking my language :)
want me to take out my python?
@@HK-sw3vi ...weirdo
This channel is the best RUclips channel that I discovered. Thank you, sir!
Thanks!
Simply loving StatQuest. Concise, clear and fun videos. One point I noted while watching this video is that the latest version of sklearn PCA() will center the data for you, but not scale it. So if you just need centering for doing pca, you don't need to worry about preprocessing.
Thanks for the update!
I am watching the 1st minute and I'm already super excited. Thanks!!
Hooray!!!!!! :)
I learn so much better in Python for some reason, I think it's because it's more interactive and you can play around with the data! Good one. Stattttquueeeeeest.
Thanks! There should be a lot more Python videos and learning material out soon.
@@statquest looking forward to it :).
The only good step by step explanation I found on the web. Thank you so much!
Hooray!!! Thank you so much! :)
You've got the right formula for simple explanations. Teach me dawg
Thank you! :)
Hi Josh... Simply incredible all StatQuest videos... Triple Bam!!!
Thank you! :)
Awesome. Please create more videos about how to implement the machine learning and data science concepts explained here in Python. That would be super helpful for us, in particular beginners.
Thanks, will do!
You are one of the best teachers I've ever found. Thank you very much!
Thank you! :)
I push the like button even before I play the video. Because Josh never fails to amaze me.
bam!
6:31 using scikit PCA
8:35 plotting scree plot
10:37 loading scores for each principal component
Thanks for the time points! I'll add those to the description to divide the video into chapters.
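For reference, a rough sketch of the three timestamped steps above, assuming the video's setup of a DataFrame with genes as rows and samples (wt1..ko5) as columns; this is not the video's exact code, and the toy data here are placeholders:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.decomposition import PCA

# Placeholder data: 100 genes (rows) x 10 samples (columns).
rng = np.random.default_rng(42)
genes = ['gene' + str(i) for i in range(1, 101)]
samples = ['wt' + str(i) for i in range(1, 6)] + ['ko' + str(i) for i in range(1, 6)]
data = pd.DataFrame(rng.poisson(lam=100, size=(100, 10)), index=genes, columns=samples)

# 6:31 -- scikit-learn PCA (transpose so samples are rows, then center and scale).
scaled = preprocessing.scale(data.T)
pca = PCA()
pca_coords = pca.fit_transform(scaled)

# 8:35 -- scree plot of the percentage of variation each PC explains.
per_var = np.round(pca.explained_variance_ratio_ * 100, decimals=1)
labels = ['PC' + str(i) for i in range(1, len(per_var) + 1)]
plt.bar(range(1, len(per_var) + 1), per_var, tick_label=labels)
plt.ylabel('Percentage of explained variance')
plt.show()

# 10:37 -- loading scores for PC1 (one value per gene).
loading_scores = pd.Series(pca.components_[0], index=genes)
print(loading_scores.abs().sort_values(ascending=False).head(10))
```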
Always can find a new and detailed explanation of steps from your videos! Thank you!
Thank you! :)
It's awesome to have the explanation based on python code. Thanks a lot!
No problem. I'm doing a lot more python coding these days, so hopefully I'll make more of these "in python" videos.
Wow, your explanation is so clear!!
Thank you! 😃
MAKE MORE PYTHON CONTENT PLEASE I LOVE IT
I'm working on it. :)
I like the way you plot the ratio of each PC~~
It is really easy to read!
BAM~~~~~~~~~~
Thank you!
Thank you Josh. Such practice is important and valuable!! And you really also taught some Python tricks that I don’t know.
Thank you! :)
Wish there were more StatQuest coding-in-python videos, they are the best! I much prefer them to the regular content, although that is still really high quality.
Noted.
Thanks for the tutorial! One thing I don't understand is why PC1 can separate the wt and ko samples. Their gene expression values are generated in the same way.
Just stating I have the same question 2 years later.
Really appreciate this and would love to see more concepts implemented in python.
Thanks!
Wow Josh.. Thanks for that unpacking concept. I never knew that my whole life...
You bet!
Another great StatQuest in the books!
Thank you! This video helped a lot with what I'm trying to do.
Awesome!
This was so clear, thanks! Finally I can do PCA in python, BAM 😊 You DA BEST!
Thanks!
Hi Josh. The best PCA explanation. Thanks a lot :-) May GOD bless you 😊
Thank you! :)
Yes, May god bless you 100 times. May the troubles of today’s world not reach your doorstep. You’re a great person.
What a playlist, I simply loved it 😘
Thank you!
Woww! That was absolutely awesome!!! Thank you so much!
Glad you liked it!
Excellent work!!! 👏👏
Thanks a lot!
very much enjoy your explanation style.
many thanks for the great videos!
Thanks!
@@statquest excellent going - really.
difficult to know what's up and down in data science, and so i'm happy your videos cover subjects from mathematical concepts to code implementation.
excellent spirit and explanations, again.
(sorry about the superlative avalanche - in the vast ocean that's the net, it's difficult finding authoritative sources covering subjects well )
bests from Germany/Denmark ;)
@@miskaknapek BAM! :)
Man, u r a gem. I will pay for the knowledge later after my graduation bro. lol
Wow! Thank you! :)
Your videos are great! Thanks
Thanks!
This was a really good explanation using Python
As always a great presentation, and the python code just gives the extra bite...
Thanks!
You are the best!!!! It would be great if you could make a video on speculative decoding using medusa and quantization of neural networks in general
@statquest
I'll keep that in mind! :)
Amazing! this is so important, thanks a lot.
Thanks! :)
Good explanation. Thank you so much.
That's a cool one. The fact that observations are columns makes it so confusing though. I'm really used to the tidy data notation
Noted
i really like your clear explanation. please do some videos about deep learning and NLP.
I'm working on them.
@@statquest yeah! I am waiting for that
Amazing video! I initially watched the video explaining PCA and i was mind-blown, thank you so much! I was hoping to ask if anyone on the comment section or even StatQuest if possible, would know how to implement PCA in a multivariate timeseries dataset and also "examine the loading scores" in such a dataset. Thanks in advance! :)
P.S - extremely clueless on anything coding or ML, but Ive got to use PCA (and other dimensionality reduction methods) on my timeseries dataset. so would greatly appreciate any direction on how to proceed.
See: stats.stackexchange.com/questions/158281/can-pca-be-applied-for-time-series-data
Thanks for the great video! :)
:)
Thank you! I’ve been struggling with this problem for so long !
Hooray! I'm glad the video was helpful. :)
Thank you very much! Super helpful!
this is so good
Thank you!
Hello Josh, Thank you for the amazing video! Quick question, at 9:18 how can I adapt "index=[*wt, *ko]" for an excel input? Let's say that we have the same variables (Genes vs wt/ko) but in an excel file. How can I add these labels to the final plot (9:47)? Thank you again!!
I'm not sure I understand your question. You can export your data from excel and import it into python (or R or whatever). Or are you asking about something else?
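In case it helps, a hedged sketch of the excel route (the file name and sheet layout here are assumptions, not from the video): read the sheet with pandas, keep the sample names as column headers, and reuse them to label the points in the final plot.

```python
import pandas as pd

# Hypothetical spreadsheet: genes as rows, sample names (wt1..ko5) as column headers.
data = pd.read_excel('gene_counts.xlsx', index_col=0)

sample_names = data.columns          # e.g. ['wt1', ..., 'ko5']

# ...scale data.T and run PCA as in the video, then reuse the names for the plot:
# pca_df = pd.DataFrame(pca_coords, index=sample_names, columns=labels)
# for sample in pca_df.index:
#     plt.annotate(sample, (pca_df.PC1.loc[sample], pca_df.PC2.loc[sample]))
```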
Incredible French accent “Poisson distribution” , I saw it three times 😆
:)
Python ε> now we are talking!
:)
COOOOOL, so easy to understand!
Hi Joshua, thanks for that, really helpful. I'm quite new to python myself, and I'm trying to compile a PCA across a range of macro-economic factors (inflation, gdp, fx, policy rate, etc.). Now, in all that you've done above, where is the display of the PCA, i.e. the newly uncorrelated data set? Is it the loading scores you printed, or the wt and ko variables you plotted? Thanks
Hi Joshua, Great Videos!
Thank you!
You're welcome! :)
great video! thanks for these!!! have you done a redundancy analysis and dbRDA plot video? thank you for contributing to our education
I haven't done that yet.
@@statquest let us know if you ever do! It would be a double bam from me. It just clicks the way you explain! Thank you again for your content!!!
Excellent tutorial!!
Thank you! :)
This video is really awesome! I am just confused on one thing, what are your predictors and what is your target?
PCA does not have predictors and targets. All variables are just...variables. For more details about PCA, see: ruclips.net/video/FgakZw6K1QQ/видео.html
dear instructor, will you release a python version of your ML course? super fan here!
One day I will.
@@statquest hope that day comes quick. stay well.
Hi Joshua, thanks a lot for the clear explanation; you walked me through each step well.
I have one question relating to scikit-learn's PCA. Actually, you mentioned it in your clip, but I would like to ask to get a bit clearer. When using scikit-learn's PCA, must we do a train and test split? Is what's in your clip just part of it? I ask this because the result I get following your steps is different from the result from other programs (CANOCO). Many thanks and I look forward to hearing from you soon.
Thanks a lot Joshua for your clear explanation. I hope to see a new clip relating to train and test PCA.
I apologise for one more question.
I used the script in your video to run with the data from the link here ("archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"). It ran well except for the last step (the chart). It shows the following error. Do you know what happened with the data or the script?
TypeError: cannot convert the series to
Thanks and sorry again for another question.
Thanks a lot Joshua for the link. It is really useful.
you are awesome bro
Hi, you are a lifesaver. I am trying to do a PCA analysis on my own data, but every demo video either uses a standard database or, like yours, creates its own data, so I am missing some crucial steps, especially in defining the index when I do it with my data. Would it be too much to ask for a few more videos on machine learning where you use excel sheet data from your laptop?
I am a newbie in data science and programming. I am a Molecular Biologist who would love to learn machine learning.
I'll keep that in mind for a future video.
Awesome!
Thank you! :)
Please post some intuitions on sparse deconvolution and compressive sensing..Would love to understand your approach..❤️
wonderful
Thanks! :)
Hi Josh, thank you for your efforts,
really, StatQuest is a magnificent channel.
Could you please make a video about Singular Value Decomposition (SVD)?
thanks
Looking forward to Kernel PCA in Python or explanation!
Hi Josh! This is an excellent video that helped me a lot!!!
I have a question: what if PC3 and PC4 are also essential? Do I need to draw two 2-D graphs, or what do I need to do?
If you want to draw the PCs and the data, then you'll have to draw multiple graphs. Or you can use the projections from the first 4 PCs and input to a dimension reduction algorithm like t-SNE: ruclips.net/video/NEaUSP4YerM/видео.html
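A small sketch of that second option, using random stand-in data (in practice the projections would come from your own scaled data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
scaled = rng.normal(size=(10, 100))              # stand-in for the scaled samples-by-genes matrix

pcs = PCA(n_components=4).fit_transform(scaled)  # keep only PC1 through PC4
tsne_coords = TSNE(n_components=2, perplexity=5).fit_transform(pcs)
print(tsne_coords.shape)                         # (10, 2): one 2-D point per sample
```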
Are loading scores eigenvalues? I wish to see a more linear-algebra way of explaining PCA!
For more details on how PCA works, see: ruclips.net/video/FgakZw6K1QQ/видео.html
Great tutorial, sorry if my question may be amateur, but how did it tell WT and KO apart in the final PCA? I thought the data set was randomly generated?
Early on we gave the rows and columns names and kept track of them.
Generally, in ML, we use 'columns' as 'features (variables)' and 'rows' as 'examples', but in the video it is the inverse. But it is not a big deal.
It depends on the field you are in. I used to work in Genetics and this is the format they used. So it's always worth checking to make sure you have the data correctly oriented.
fantastic, like always.
I wonder how the Poisson distribution caused the wt samples and the ko samples to each be correlated with each other?
Because we generated the data, I selected different lambda values for the wt samples than for the ko samples.
BAM!!! I understood what u said. I show my gratitude. But I have a query.
I am confused about my dataset regarding which to consider as rows and which as columns.
My dataset is from Phasor Measurement Units (PMUs) used in the electrical grid, on the sort of distribution lines we see around.
One single PMU measures 21 electrical parameters for a timestamp.
We use around four PMUs, each measuring the 21 parameters at different locations at the same time, continuously over a period of time.
How can I arrange the above data for performing PCA, sir?
Sir, those two cases you mentioned where PCA would work are what I am also interested in calculating, apart from the combination of all of the PMUs' timestamps.
Can you mention how to arrange the data (rows and columns) for both of the mentioned viable cases?
Thank you so much!! You are really awesome, sir.
Thank you!!! When we are speaking about variation in PCA, is that the same as variance?
Yep.
@@statquest Thank you very much for the clarification! I googled it, and seems that it's two different things, but sometimes they can be used interchangeably or be the same thing.
@@Cat_Sterling Yes, I guess it depends on how you want to use them and whether you divide by 'n' or 'n-1', but, at least on a conceptual level, they are the same.
@@statquest Thank you so much again! Really appreciate your reply! Your channel helped me so much!!!
Very concise, I will surely be coming back to this video, however I would like to know why PCA is able to group these two categories (wt and ko), when it's shown they are generated from the same random method. If all indexes were generated at the same time, I would get it, but as they are generated index by index, I seem not to be able to grasp it.
The trick is at 3:48. For each group, wt and ko, we select a different parameter for the poisson distribution and generate 5 measurements from each of those two different distributions. One set is for wt and the other set is for ko.
@@statquest I think my confusion comes from the fact that these will make the two groups different from one another (all wt's different from ko's), but I wouldn't predict them to be similar within the group (wt1 being vertically close to wt2, and to wt3...), thus I tend to believe PCA should tell them apart, but not in exactly two groups (wt's vs ko's); I would predict something more like two clouds instead of two "vertical lines of points" in the 2-D.
@@3stepsahead704 Remember how PCA actually works: it finds the axis that has the most variation (which is between WT and KO) and focuses on that. Then it finds the secondary differences (among the WT samples and among the KO samples). However, because the differences between WT and KO are big, the scale on the x-axis will be much bigger than the scale on the y-axis. Thus, the samples will appear to be in a vertical line rather than spaced apart like you might guess they should be. In short, check the scales of the axes; they will explain the difference between what you think you see and what you expect.
@@statquest Thank you very much for taking the time to explain this. I now get it!
First I want to thank you for all the awesome videos on PCA. I wanted to experiment with the demo code you published but I'm having problems in the data generation. The asterisk method used to stack wt and ko series is not working.
Which version of Python are you using? The code was written for Python 3 and I'm not sure the asterisk method works in Python 2.x.
Oh, you are right, I was accidentally using Python 2. Now it works well in Python 3
Hooray! I'm glad you got it working :)
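For anyone else who hits this, a tiny sketch of the star-unpacking in question; the [*wt, *ko] syntax needs Python 3.5 or later, while plain list concatenation does the same job everywhere:

```python
wt = ['wt' + str(i) for i in range(1, 6)]
ko = ['ko' + str(i) for i in range(1, 6)]

labels_py3 = [*wt, *ko]   # star-unpacking: Python 3.5+ only
labels_any = wt + ko      # equivalent concatenation, works in Python 2 and 3
assert labels_py3 == labels_any
```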
Firstly, very good video. Secondly, I am running the code on some spectra I have and I get a good PCA plot, but the loading scores seem to be wrong. Do you have any idea why?
No idea.
Question please...
09:50 wt and ko samples are both created with the same random function Poisson (10, 1000). Why are wt samples (and ko samples) more correlated??
Because rd.randrange(10, 1000) returns a random number between 10 and 1000. Once we get that random value, we use it to generate 5 values for the wt samples using a poisson distribution. Then we select another random value between 10 and 1000 and use it to generate 5 values for the ko using a different (because the random value is different) poisson distribution.
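A minimal sketch of that generation step (not the video's exact code; the names are assumptions). The key point is that each gene gets one random mean for the wt samples and a different random mean for the ko samples:

```python
import random as rd
import numpy as np
import pandas as pd

genes = ['gene' + str(i) for i in range(1, 101)]
wt = ['wt' + str(i) for i in range(1, 6)]
ko = ['ko' + str(i) for i in range(1, 6)]
data = pd.DataFrame(index=genes, columns=[*wt, *ko])

for gene in data.index:
    wt_mean = rd.randrange(10, 1000)  # mean for this gene in the wt samples
    ko_mean = rd.randrange(10, 1000)  # a *different* random mean for the ko samples
    data.loc[gene, wt] = np.random.poisson(lam=wt_mean, size=5)
    data.loc[gene, ko] = np.random.poisson(lam=ko_mean, size=5)
```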
at 5:10 why do we scale our data?
I explain why we scale the data in this video: ruclips.net/video/oRvgq966yZg/видео.html
wow...so awesome..BAM!!!
Thanks! :)
4:46 Why does gene4,ko1 have a value over 1000 if the command says "get a random value between 10 and 1000"?
Thanks for the video!!
We select a random number between 10 and 1000 to be the mean of a poisson distribution. That's just the average value, and there can be larger and smaller values.
@@statquest oh! i see!! thank you so much, I'm still learning about this
Hi Josh, thanks a lot for this video, it was right on point! I was wondering how to apply PCA in Python and came to this marvelous video.
I made my own Jupyter Notebook following your instructions and it came out neatly. One minor problem (I really dunno if it is one): my data came out the other way around at the last steps. My WildType cluster was on the right, while the KO one was on the left. I tried several times because I thought it was due to randomness, but it always had the same shape. Any ideas on this?
In other news, I'm from Argentina (I speak Spanish), so I was wondering if my Notebook was of any use to your Spanish-speaking viewers. If so, I would gladly share it!
Cheers from Argentina, you've got a new Follower :)
If the shape is the same, it's OK. The orientation is somewhat arbitrary. I'm sure your notebook would be helpful. You can share it on GitHub, or send it to me and I'll add it to this repository. You can contact me through my website: statquest.org/contact/
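For context, a small demonstration with random stand-in data of why the left/right flip is harmless: the sign of each principal component is arbitrary, so negating PC1 mirrors the plot without changing any of the distances between samples.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 5))                        # stand-in data: 10 samples, 5 features

coords = PCA(n_components=2).fit_transform(X)
mirrored = coords.copy()
mirrored[:, 0] *= -1                                # flip PC1's sign: clusters just swap sides

print(np.allclose(pdist(coords), pdist(mirrored)))  # True: pairwise distances are unchanged
```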
Thank you Joshua for this wonderful explanation. Thanks a lot.
I am using your code for generating a scree plot in the same way and I obtain this error: bar() missing 1 required positional argument: 'left'
Yes, I was using the original code given. I am using Python3, could that be the issue?
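A hedged guess at the cause: in matplotlib versions before 2.0, bar()'s first argument was named left rather than x, so calling it with the x= keyword fails with exactly that message. Passing the positions positionally (or upgrading matplotlib) should work on both old and new versions; the percentages below are just placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

per_var = np.array([85.2, 5.1, 3.0, 2.4, 1.6])                 # placeholder percentages
labels = ['PC' + str(i) for i in range(1, len(per_var) + 1)]

plt.bar(range(1, len(per_var) + 1), per_var)                   # positional args: version-proof
plt.xticks(range(1, len(per_var) + 1), labels)
plt.ylabel('Percentage of explained variance')
plt.show()
```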
Hi Josh
Thank you for the video. It was a great tutorial. Just one question: isn't what you called loading_score in the python code in fact the component score? It was a score for each record (gene). Please correct me if I am wrong, but isn't the loading score the correlation between the original fields (wt1, wt2, etc.) and the components?
Thank you
Hi, by any chance, do you have a video about the theory of PLS and how to implement it in machine learning?
I'll keep it in mind.
Your channel has helped me immeasurably :) I just had one question here, and that is how precisely to go from the data sample array you start with to the scaled data by hand? I tried but didn't get the correct answer. I did watch the PCA Explained video as well, but just didn't get the same result here, and wonder if you could clarify exactly how it gets from one to the other... should it be: scaled_data = (data['wt1'][i] - np.mean(data['wt1'])) / np.std(data['wt1']) ... for each datapoint i and each column, right? This isn't real code, I'm just making the point that it's z = (x - u) / s :)
It depends on how the data are oriented. Sometimes it's in columns, sometimes rows. So check to make sure your data is in columns.
@@statquest for the test code you supplied, so columns, am I using the correct method?
@@RachelDance It sounds like it.
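For anyone comparing by hand, a minimal check assuming samples are rows (as after data.T in the video): preprocessing.scale() standardizes each column using the population standard deviation (ddof=0), which is a common source of small mismatches.

```python
import numpy as np
import pandas as pd
from sklearn import preprocessing

rng = np.random.default_rng(3)
df = pd.DataFrame(rng.poisson(lam=100, size=(10, 4)),
                  columns=['gene1', 'gene2', 'gene3', 'gene4'])

scaled_sklearn = preprocessing.scale(df)                 # column-wise z-scores
scaled_manual = (df - df.mean()) / df.std(ddof=0)        # note ddof=0, not pandas' default of 1

print(np.allclose(scaled_sklearn, scaled_manual))        # True
```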
Thank you very much for this tutorial. Please can you explain how to get the correlation matrix?
With numpy, you use corrcoef().
@@statquest Thank you very much
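A quick sketch of corrcoef(): by default it correlates rows, so pass rowvar=False (or transpose) when your variables are columns; pandas' df.corr() gives the same matrix.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame(rng.normal(size=(50, 3)), columns=['a', 'b', 'c'])

corr_np = np.corrcoef(df.values, rowvar=False)   # 3x3 matrix, variables as columns
corr_pd = df.corr()                              # same thing via pandas
print(np.allclose(corr_np, corr_pd))             # True
```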
Math learned statistics from Josh ;)
:)
Hi, great video, thanks. One question: as there are 100 genes (features), won't there be 100 PCs?
In theory, the answer is "yes", but in practice the answer is "no". To learn more about why, see: ruclips.net/video/oRvgq966yZg/видео.html
@@statquest Thanks 😊
Hello, thanks for the videos. I think you explain great. I have a question. How can we make rotations with this package?
You can multiply data by the loading values.
@@statquest Thanks for the response. Will that allow me to make rotations such as varimax, quartimax, etc.? :(
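To make "multiply the data by the loading values" concrete, a small sketch with random stand-in data: projecting the centered data onto pca.components_ reproduces pca.transform(). Varimax and quartimax themselves are not built into scikit-learn's PCA; as far as I know you would need something like the factor_analyzer package or statsmodels' factor-rotation utilities for rotated solutions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
X = rng.normal(size=(10, 6))                     # stand-in data

pca = PCA(n_components=3).fit(X)
by_transform = pca.transform(X)
by_hand = (X - pca.mean_) @ pca.components_.T    # center, then multiply by the loadings

print(np.allclose(by_transform, by_hand))        # True
```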
Wow, thanks. One question: while verifying loading scores, I saw that 'False' command. Typically, for PCA, the data needs to be scaled, right? But false means it is not scaled, so I am confused. Please clarify this.
The data are already all on the same scale.
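If the 'False' in question is the ascending=False argument to sort_values() (a guess, since the exact line isn't quoted here), it only controls sort order, putting the largest-magnitude loading scores first, and has nothing to do with scaling. A sketch with assumed names:

```python
import numpy as np
import pandas as pd

genes = ['gene' + str(i) for i in range(1, 101)]
loading_scores = pd.Series(np.random.default_rng(6).normal(size=100), index=genes)

sorted_scores = loading_scores.abs().sort_values(ascending=False)  # biggest magnitudes first
top_10_genes = sorted_scores[0:10].index.values
print(loading_scores[top_10_genes])   # original (signed) loading scores for the top 10 genes
```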
What about the source of the dataset??
The dataset is created within the code.
@@statquest thanks for your quick reply
What do negative values in the loading scores indicate?
Loading scores are explained here: ruclips.net/video/FgakZw6K1QQ/видео.html
@@statquest Thank you.
Is scaling to be done for both the test and train datasets?
Yes.