Lecture 4 "Curse of Dimensionality / Perceptron" - Cornell CS4780 SP17

  • Published: 6 Sep 2024
  • Cornell class CS4780. (Online version: tinyurl.com/eC... )
    Official class webpage: www.cs.cornell....
    Written lecture notes: www.cs.cornell....
    Past 4780 exams are here: www.dropbox.com/s/zfr5w5bxxvizmnq/Kilian past Exams.zip?dl=0
    Past 4780 homeworks are here: www.dropbox.co...
    If you want to take the course for credit and obtain an official certificate, there is now a revamped version with 118 new high quality videos, made just for this course, offered through eCornell ( tinyurl.com/eC... ). Note, however, that eCornell does charge tuition for this version.

Comments • 105

  • @michaelmellinger2324
    @michaelmellinger2324 2 years ago +18

    1:00 A few words about k-Nearest Neighbors
    2:00 Curse of dimensionality - by examining k neighbors in various dimensions
    11:00 Perhaps the high dimensional data is in a subspace or low dimensional sub-manifold
    12:55 A manifold is, roughly, a space where Euclidean distances work locally but not globally
    15:45 Detect manifold by creating spheres
    16:30 Helps to think about the true dimensionality of the data
    17:55 Always good to try and reduce the dimensionality
    18:50 Demo - k-nearest neighbors
    21:30 Demo - Curse of dimensionality
    32:45 Advantages and disadvantages of KNN
    36:45 Perceptron
    38:10 Works better in high-dimensional spaces where points are far apart, but not in low-dimensional ones; the opposite of kNN
    39:35 Mathematically how do we define a hyperplane
    43:40 How to find the hyperplane
    46:00 Geometrically we are now saying the hyperplane goes through the origin. We removed b

  • @minhtamnguyen4842
    @minhtamnguyen4842 4 years ago +19

    You are my favorite teacher in the whole world. Every time I stumble on a difficult subject in ML or DL and get discouraged, I randomly rewatch your lectures and feel all inspired again. Thank you for being so great at teaching.

  • @kirtanpatel797
    @kirtanpatel797 5 years ago +9

    It's really great to learn about the assumptions behind an algorithm's success and its limitations! It truly helps in making a better choice of algorithm :)

  • @user-ks9bl8bz9w
    @user-ks9bl8bz9w 1 year ago +1

    Up until now it has been such a pleasure to listen to you, Prof. Kilian Weinberger. You formalized all the ideas I have been learning from across the internet, books, and my professor's lectures at uni. I am quite excited for what's coming. I'll stay tuned in.

  • @ali75988
    @ali75988 4 years ago +7

    Sir, the previous lectures were very good and explained everything well. The unique thing about today's lecture was explaining why k-nearest neighbors can't be used on high-dimensional data, which other lectures are missing. That being said, this lecture might be fine for some students, but as for me, I learnt some German.

  • @MrSyncope
    @MrSyncope 1 year ago +2

    That is one of the best lectures I've seen so far on this topic! Thanks Prof! I now have to make the time to work through the whole course. Awesome teaching

  • @naifalkhunaizi4372
    @naifalkhunaizi4372 3 years ago +3

    Professor Kilian you're a legend!! Amazing lectures with a beautiful sense of humor

  • @ylee5269
    @ylee5269 5 years ago +31

    Thank you Prof. Weinberger! such a good lecture!

  • @sergiujava
    @sergiujava 1 year ago

    What an eye-opening/insightful lecture (and series of lectures, more generally)! Thank you, prof. Weinberger. This class is the best/friendliest/most fun way to learn Machine Learning, by far.

  • @jaimecristalino
    @jaimecristalino 3 years ago +5

    Thank you for posting your videos! They're the best!

  • @chillmode9576
    @chillmode9576 5 years ago +9

    "Any questions at this point?"👨‍🎓 "Raise your hand if this is making sense."👨‍🎓

  • @HuyNguyen-cv9zb
    @HuyNguyen-cv9zb 3 years ago +7

    You are simply a wonderful lecturer. The whole course is amazing.

  • @sudhanshuvashisht8960
    @sudhanshuvashisht8960 3 years ago

    In one of the homework problems in the 2017 Spring folder, I couldn't grasp the second claim of Q1, i.e. "This supports our second claim: as the dimensionality increases, the distance growth between normally distributed points overwhelms the degree to which some of those points are closer together than others". I have proved that the limit of 4*sigma_d/mu_d approaches 0 as d -> infinity, but its relation to the claim made above is puzzling me a bit (in fact, I would say I could not completely comprehend the claim itself either). Incredibly thankful for all your previous replies, Prof. Kilian.
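
    A minimal simulation sketch of that claim (not from the homework; it assumes standard-normal points and uses numpy, with illustrative sample sizes):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500  # number of point pairs per dimensionality

    for d in [2, 10, 100, 1000, 10000]:
        # distances between independent standard-normal points
        x = rng.standard_normal((n, d))
        y = rng.standard_normal((n, d))
        dist = np.linalg.norm(x - y, axis=1)
        mu, sigma = dist.mean(), dist.std()
        # the spread shrinks relative to the mean as d grows
        print(f"d={d:6d}  mean={mu:8.2f}  std={sigma:5.2f}  4*std/mean={4 * sigma / mu:.4f}")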

  • @muratcan__22
    @muratcan__22 5 years ago +71

    raise your thumbs if this is clear

  • @user-me2bw6ir2i
    @user-me2bw6ir2i 2 years ago

    Thank you so much for your work!
    I was really curious about manifolds, but couldn't find any good explanation, and yours is just brilliant!

  • @JoaoVitorBRgomes
    @JoaoVitorBRgomes 3 years ago +5

    He killian this one

  • @KW-fb4kv
    @KW-fb4kv 5 months ago

    A minor suggestion about the charts showing distance vs. dimension around the 29:00 mark: I think at least one student was a bit misled because they didn't notice how drastically the x-axis changes, since the peak of the distribution visually appears to be in the same spot on each of the 6 charts.

  • @roktimjojo5573
    @roktimjojo5573 5 years ago +6

    Perceptron starts from 36:50

  • @amsrremix2239
    @amsrremix2239 2 years ago

    Great lecture. Really helped me get a high level understanding of what’s going on

  • @nhpkm1
    @nhpkm1 5 years ago +5

    I have a question about 35:30. From what I know, the complexity should be n^2 * d: between each pair of points you compute a distance over d dimensions, and each of the n points is compared against the other n-1 points.

    • @kilianweinberger698
      @kilianweinberger698  5 years ago +7

      The (n-1)*n*d complexity applies if you compute the leave-one-out training error. If you have a single test point, you need to compute exactly n distances, so you obtain a test-time complexity of O(nd).
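
      A small sketch of that test-time computation (a plain numpy version, not the course code; the names are illustrative):

      import numpy as np

      def knn_predict(X_train, y_train, x_test, k=3):
          # O(n*d): one Euclidean distance per training point, each over d dimensions
          dists = np.linalg.norm(X_train - x_test, axis=1)
          # argsort is O(n log n); np.argpartition(dists, k) would keep this O(n)
          nearest = np.argsort(dists)[:k]
          # majority vote among the k nearest labels
          labels, counts = np.unique(y_train[nearest], return_counts=True)
          return labels[np.argmax(counts)]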

  • @DavesTechChannel
    @DavesTechChannel 4 years ago +2

    great explanation of curse of dimensionality, thank you.

  • @mihnearomanovschi1444
    @mihnearomanovschi1444 4 years ago +3

    Hello, thank you for the lecture.
    Could you please comment more on why, for a uniform distribution on the interval [0,1], the probability of landing in the interior is (1 - 2 * epsilon), as used when explaining the curse of dimensionality?
    It is at approximately 10:30.

  • @SinghCoder
    @SinghCoder 5 years ago +7

    Hello Sir!! Firstly, thanks for sharing these lectures so that we can watch them.
    Secondly, regarding the math background session you mention in the video: I could not find it on the course website or anywhere else.
    Are there notes or a video of that class that we can look at?
    Thanks in advance

  • @compilations6358
    @compilations6358 5 years ago +10

    I have a doubt about the explanation of points being at the edges of a d-dimensional hypercube. You explained it by taking (1-2*epsilon)^d as the probability of a point being in the interior in all dimensions. If we instead calculate the probability of a point being near an edge, it would yield (2*epsilon)^d, which also decreases as d increases.

    • @kilianweinberger698
      @kilianweinberger698  5 years ago +27

      Good question. There is a subtle but important difference. In order to be in the interior of the cube you have to be in the interior in _all_ dimensions. However, to be at the edge of the cube, you only have to be at the edge in at least _one_ dimension. Your computation (2*epsilon)^d gives the probability that you are close to an edge in every single dimension. That's very unlikely, as you point out. However, the probability that you are near the edge in at least one dimension is 1-(1-2*epsilon)^d, which approaches 1 very quickly as d increases. Hope this clears things up!
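
      A quick numerical check of this argument (a sketch, assuming points drawn uniformly from [0,1]^d; epsilon = 0.05 is just an illustrative choice):

      import numpy as np

      eps = 0.05
      rng = np.random.default_rng(0)

      for d in [1, 2, 10, 100, 1000]:
          x = rng.uniform(size=(10_000, d))
          # "near an edge" = within eps of 0 or 1 in at least one dimension
          near_edge = ((x < eps) | (x > 1 - eps)).any(axis=1).mean()
          print(f"d={d:5d}  empirical={near_edge:.4f}  1-(1-2*eps)^d={1 - (1 - 2 * eps) ** d:.4f}")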

    • @sureshbishnoi4947
      @sureshbishnoi4947 4 years ago

      @@kilianweinberger698 28:56 "In high-dimensional space you are always on the opposite corner of some edge."
      The statement raises a question:
      Since the probability of being close to a corner of the hypercube is very low, aren't only very few points at the farthest distance (i.e., on opposite corners)? And wouldn't the distance between two points that are near the edge in the same dimension be very short? So shouldn't k-nearest neighbors still work when the test point is near an edge?

    • @haritejatatavarti9669
      @haritejatatavarti9669 4 years ago

      @@kilianweinberger698 I had the exact same question and your explanation makes it clear. I have a follow-up question: in high-dimensional spaces, if a point is more likely to be on the edges, wouldn't most of the test points we need to predict also be on the edges? And if we have enough training points, wouldn't k-NN still work fine, since we have enough points around the test point? Even though it fails on interior points, would k-NN still work fine on test points at the edges?

  • @AlanWil2
    @AlanWil2 4 years ago +4

    Wow! This is good stuff!

  • @allencp23
    @allencp23 3 years ago

    One thing he doesn't mention much is that the "curse of dimensionality" graphs for the kNN demo are on different y-axes. So that first distribution curve would be unnoticeable on the 1,000-dimension chart, and the 100-dimension chart is twice as tall as the 10-dimension chart.
    Carry on...

  • @sudhanshuvashisht8960
    @sudhanshuvashisht8960 4 years ago +1

    When we say all the data points are on the edges in high-dimensional spaces, it seems a bit counterintuitive, since we are also saying we drew those n points from a uniform distribution. I know mathematically we did everything right, but intuition-wise I can't see why there are no points in the interior even though the distribution the points are drawn from is uniform.

    • @kilianweinberger698
      @kilianweinberger698  4 years ago +1

      One intuition is that almost all the volume is along the edges. The "interior" has hardly any volume at all. So when you pick a point uniformly across the volume, you almost always pick a location close to an edge.

    • @sudhanshuvashisht8960
      @sudhanshuvashisht8960 4 years ago

      @@kilianweinberger698 brilliant. This makes it absolutely clear, thanks a lot.

  • @arunjolly5521
    @arunjolly5521 4 years ago +1

    Sir, it would be great if you could recommend some resources on high-dimensional spaces. I find it a bit difficult to understand some concepts in very high dimensions.

  • @easterPole
    @easterPole 5 years ago +2

    It's very helpful and quite detailed. However, I can't find the exercises/assignments for this course on the course webpage. It would be of great help if someone could direct me to them. Thanks.

  • @sugamgarg8252
    @sugamgarg8252 3 years ago +1

    Regarding the plots around the 22nd minute: when you plot the same number of points in increasing dimensions, the density of points goes down. Can we still compare the average point distance in 2-dimensional space with 20-dimensional space? On a line, 10 points appear closer than they would on a plane.

  • @rajeshs2840
    @rajeshs2840 4 years ago +4

    this guy in some sense awesome....

  • @anirbanghosh6328
    @anirbanghosh6328 4 years ago +12

    Something is missing in the beginning. I mean the proof

  • @phamngoclinh1373
    @phamngoclinh1373 5 years ago +5

    06:18 the moment that I found science can be damn emotional

  • @bhushankumar5317
    @bhushankumar5317 3 years ago +1

    😊🙏 Wonderfull....

  • @SundaraRamanR
    @SundaraRamanR 4 years ago +2

    I have a doubt about the missed lecture on the Bayes optimal classifier and the proof (notes: www.cs.cornell.edu/courses/cs4780/2017sp/lectures/lecturenote02_kNN.html). It says that as n -> infinity, x_NN and x_t become identical, with the picture suggesting the points are crowded together because of the large number of points. But the points can remain sparse even as the number of points goes to infinity; e.g., the prime numbers are infinite even though they are sparse.

    • @kilianweinberger698
      @kilianweinberger698  4 years ago

      Yes, good point. Actually most of the proof in the original nearest neighbor paper ( tinyurl.com/cover-hart ) is addressing exactly this point.

  • @hussainsalih3520
    @hussainsalih3520 2 years ago

    keep moving amazing discussion

  • @shmgaranganaoalmeda1712
    @shmgaranganaoalmeda1712 3 years ago +2

    the way you explain things is so delightful i love it take my hand in marriage good sir

  • @raghavgaur8901
    @raghavgaur8901 5 years ago +1

    Great answer! Also, can you tell me whether there is any way to estimate the number of intrinsic dimensions in our data, so that it becomes easier to decide whether to apply kNN directly or to do PCA first and then apply kNN?

  • @laurasofiabayona2288
    @laurasofiabayona2288 2 years ago

    You are so amazing!

  • @hello-pd7tc
    @hello-pd7tc 4 years ago +1

    Day 4 √. A little confused about the perceptron, especially the wx+b part, and also why the perceptron works in higher-dimensional spaces. Will watching Andrew Ng's videos and reading the books help?

    • @kilianweinberger698
      @kilianweinberger698  4 years ago

      I would look at the class notes first: www.cs.cornell.edu/courses/cs4780/2018fa/lectures/lecturenote03.html
      But yes, watching more videos / reading books should also help. Good luck!

    • @hello-pd7tc
      @hello-pd7tc 4 years ago

      @@kilianweinberger698 Thank you, professor!

  • @java2379
    @java2379 1 year ago

    Prof. Kilian, I have a question please. You demonstrated that random points tend to accumulate on the sides of the cube, and we know that the probability that a point lies in the middle is not zero.
    What kind of structure do those inner points need to have in order to be there? I mean, does it mean that increasing dimensionality works like a filter, where noise goes to the sides and structured information goes to the middle, which would mean points far from the sides could be very structured and hence very interesting to analyze?

    • @kilianweinberger698
      @kilianweinberger698  11 months ago +1

      The reason the inside is so empty is that you are only on the inside if *every* dimension has a value near the mean. The moment even a single dimension is far from the mean and near the edge, you are no longer on the inside.
      Imagine you toss a coin for each dimension so that you have a 90% chance of being in the middle region, and 10% chance of being near the outside edge. In 1D the chance of being in the middle is therefore very high (90%). In 10d the chance of being in the middle (in every one of the 10 dimensions) is 0.9^10=0.34. In 100d it is 0.9^100=0.00002656139889.
      Hope this helps.

    • @jumpingcat212
      @jumpingcat212 5 months ago

      @@kilianweinberger698 Hi Prof. Weinberger, I get the idea that in high dimensions the middle region is almost empty, but I don't get why the fact that the points all live in the edge volume actually matters and makes the k-nearest neighbor algorithm fail. It feels like as the dimension gets bigger and bigger, if we don't scale/normalize, the distance between any two points inevitably gets bigger and bigger. So if in high dimensions the distance between any two points is large, it seems like the fault lies in not normalizing, rather than in the points all living in the edge volume.
      Also, in k-nearest neighbors it feels like what we care about is that these k points are "relatively" close to the test point "compared to" the remaining points. So even though in high dimensions these k nearest neighbors living in the edge volume have distances close to 1 in some dimensions, why does it matter? It seems like they are still closer to the test point than the others...
      Also, the claim that points all live in the edge volume is based on the uniform distribution assumption, but we don't know what the data's real distribution is. For example, if the data's distribution in each dimension is such that the probability of being in the middle is (1-2*epsilon)^(1/d), then points can still live in the middle of every dimension no matter what the total dimension is. So why is the curse of dimensionality such a strong curse...?

  • @roniswar
    @roniswar 2 years ago

    Hello Prof. I really liked the curse of dimensionality explanation. However, there is something I do not understand in the whole picture.
    I get that kNN loses accuracy in high dimensions due to the curse of dimensionality. But I do not understand how transformers work well in the high-dimensional (usually a few hundred dimensions) space of word/sentence embeddings.
    Any point in a subspace can be seen as a vector from the origin to that point. Hence, cosine similarity (which transformers work with) seems very similar to the distance between points. What am I missing? Thanks again!!

  • @MrWadood007
    @MrWadood007 3 years ago

    Thanks a Kilian times!

  • @richaasenthil
    @richaasenthil 4 years ago

    Dear Prof @Kilian Weinberger:
    Can we use the 'dimensionality and distance between neighbors' argument to explain why MCMC can't overcome the curse of dimensionality, i.e., that the convergence rate of MCMC degrades exponentially with the dimension d?
    My thought process: in MCMC, to improve the convergence rate, the distance between the current and the newly sampled parameters should increase with the dimensionality of the parameter space. Is this right?

  • @dataaholic
    @dataaholic 4 years ago +3

    What are the topics that were covered in the math background lecture?

  • @saitrinathdubba
    @saitrinathdubba 5 years ago +3

    Hello, thanks a lot for the lectures!! Very informative :) I guess there is one lecture missing after the 3rd lecture, where you briefly discuss k-NN and distance metrics... This lecture starts off with the curse of dimensionality, so the lecture on the Bayes optimal classifier is missing. Once again, thanks a lot for the lectures :) :)

    • @KulvinderSingh-pm7cr
      @KulvinderSingh-pm7cr 5 years ago +2

      read about it here : www.cs.cornell.edu/courses/cs4780/2017sp/lectures/lecturenote02_kNN.html

  • @sudhanshuvashisht8960
    @sudhanshuvashisht8960 3 years ago

    I have a question about the example given at 14:20: when data lies on some underlying manifold, do we use a distance metric other than Euclidean to calculate the 'true' distance? Otherwise (if we use the Euclidean metric), the point in the example is very likely to be considered the nearest neighbour of the test point, despite being quite far away from it in terms of the 'true' distance.

    • @sudhanshuvashisht8960
      @sudhanshuvashisht8960 3 years ago

      The question is in some sense related to your statement at 14:41, "For kNN, we only work with/look at the k nearest neighbours". But in order to determine the k nearest neighbours, won't we need a 'true' distance metric to eliminate points farther away than the kth nearest neighbour?

    • @kilianweinberger698
      @kilianweinberger698  3 years ago +1

      Oh yes totally. For example if data is sampled from the surface of a sphere you can use the spherical distance between two data points. In practice, it is often the case that you don't have any good distance metric that respects the manifold geometry, however.
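
      For instance, the spherical case mentioned in the reply can be sketched like this (a minimal example, assuming the data really lies on the unit sphere):

      import numpy as np

      def spherical_distance(x, y):
          # great-circle (geodesic) distance between two points on the unit sphere
          x = x / np.linalg.norm(x)
          y = y / np.linalg.norm(y)
          return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))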

  • @hussainvahanvaty3220
    @hussainvahanvaty3220 5 years ago +1

    Due to the curse of dimensionality, the assumption of k-NN breaks down, as even the nearest neighbors seem to be far away.
    The statement is fair enough; however, it raises a question:
    Since the k nearest neighbors will still be nearer to the test point than the other points, their distances will be smaller than the others'. They will still be the closest points to the test point, just farther away in absolute terms. So why does the algorithm break down?
    E.g., for 3-NN, suppose the closest neighbor is 1.50 units away, the second 1.51 units, and the third 1.52. Thus, if the relevant distance scale is of the order of 0.01 units, the classifier should still work, as the other points will be farther away than this, even though the distance from the test point to the 3-NN is around 1.50 (which is relatively large compared to the 0.01 units being used to distinguish the other training points).
    If you could please clarify this for me. Thanks!

    • @hussainvahanvaty3220
      @hussainvahanvaty3220 5 years ago +1

      My question is similar to that asked at 26:20. If you could please clarify the answer for me.

    • @kilianweinberger698
      @kilianweinberger698  5 years ago +5

      Yes, that’s a common confusion. The problem is not that in high dimensions a test point won’t have a nearest neighbor, the problem is that the nearest neighbor won’t be very similar to the test point. In low dimensions the nearest neighbor is much much closer than the average distance to any other point, however in high dimensions it can be only marginally closer. So the fact that the labels are locally smooth doesn’t help you make predictions, because the nearest neighbor of a test point just isn’t local. Hope this helps.
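
      A small simulation along these lines (a sketch with uniformly drawn points; the sizes are arbitrary):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 1000  # training points

      for d in [2, 10, 100, 1000]:
          X = rng.uniform(size=(n, d))
          x_test = rng.uniform(size=d)
          dists = np.linalg.norm(X - x_test, axis=1)
          # in low d the nearest neighbor is much closer than a typical point;
          # in high d the ratio creeps toward 1
          print(f"d={d:5d}  nearest/average distance = {dists.min() / dists.mean():.3f}")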

    • @narindersingh10
      @narindersingh10 4 years ago

      Wow !

    • @mkutkarsh
      @mkutkarsh 3 years ago

      @@kilianweinberger698 woah, thanks

  • @siddheshphadke4297
    @siddheshphadke4297 3 years ago +3

    Thank you for these awesome lectures!
    I have a quick question: is there a video lecture explaining the proof of 1-NN convergence and the Bayes optimal classifier (from the class notes www.cs.cornell.edu/courses/cs4780/2018fa/lectures/lecturenote02_kNN.html ), or was that part of the math background lecture you mentioned at the beginning of this video?

    • @kilianweinberger698
      @kilianweinberger698  3 years ago

      Hmm, sorry, not sure. Maybe I skipped that proof that year. Can't remember. :-/

    • @siddheshphadke4297
      @siddheshphadke4297 3 years ago

      @@kilianweinberger698 Thanks for making these videos available for all, they help a lot in understanding the intuition/ logic behind machine learning algorithms!

  • @dankaxon4230
    @dankaxon4230 1 year ago

    How come the algorithm that finds the distances between points in k-nearest neighbours is O(n*d)? I didn't get it, please help.

    • @kilianweinberger698
      @kilianweinberger698  1 year ago

      For a test point, you have to go through all n training points and for each one compute the distance over all d dimensions. So the L2-distance between two vectors is O(d) and the L2-distance to n vectors is O(dn).

    • @dankaxon4230
      @dankaxon4230 1 year ago

      @@kilianweinberger698 But then we also do the same process for every test point: we compute one distance in O(d), then for each test point we compute distances to all training points in O(n*d), and then we repeat that for all the test points (points 2, 3, ..., n). Therefore it should have been O(n^2*d), in my opinion.

  • @jiahao2709
    @jiahao2709 4 years ago

    When will the next course be available (besides ML and DL)? For example, a course on Gaussian processes for ML? I really hope there will be another one. I will be your super fan.

  • @JoaoVitorBRgomes
    @JoaoVitorBRgomes 2 years ago

    Kilian, please answer this doubt: the solution of appending a 1 to the X matrix (or dataset) creates another dimension, but now all data points will sit at 1 on this new axis, correct? Isn't that strange? Does it cause problems?

    • @kilianweinberger698
      @kilianweinberger698  2 years ago +1

      You can view it more as a mathematical trick. w'x+b then becomes v'z where z is [x;1] and v=[w;b]. The interpretation is that you add one dimension and you shift all your data by 1 in that dimension.
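
      In code, that trick looks roughly like this (a sketch; the function name is illustrative):

      import numpy as np

      def absorb_bias(X, w, b):
          # z = [x; 1] for every row, v = [w; b]
          Z = np.hstack([X, np.ones((X.shape[0], 1))])
          v = np.append(w, b)
          return Z, v

      X = np.random.randn(5, 3)
      w = np.random.randn(3)
      b = 0.7
      Z, v = absorb_bias(X, w, b)
      print(np.allclose(X @ w + b, Z @ v))  # True: same activations, no explicit bias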

  • @narindersingh10
    @narindersingh10 4 years ago +1

    Great

  • @abhishekprajapat415
    @abhishekprajapat415 4 years ago +1

    Please help:
    why is l^d almost equal to k/n?
    Is this some known formula/expression?
    I searched in many places, but all I found is that it is simply stated like this, not why it holds. So please help.
    Thanks.

    • @thunth58
      @thunth58 4 years ago +2

      As I understand it, the high-dimensional space of the data set is assumed to be a unit cube with volume 1^d = 1. If all n data points are uniformly distributed, the small sub-cube containing the k nearest neighbors holds a fraction k/n of the points, so its volume l^d is ~ (k/n)*(1^d), i.e., l^d ≈ k/n.
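
      Plugging in numbers makes the point (a sketch; k = 10 and n = 1000 are arbitrary illustrative values):

      # l^d ~ k/n  =>  l ~ (k/n)^(1/d): edge length of the neighborhood sub-cube
      k, n = 10, 1000
      for d in [2, 10, 100, 1000]:
          l = (k / n) ** (1 / d)
          print(f"d={d:5d}  l={l:.3f}")  # l -> 1: the 'neighborhood' spans almost the whole cube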

  • @officialstylechild
    @officialstylechild 1 month ago

    Raise your hand if that makes sense… crickets… ok moving on!

  • @TheSlayer9X
    @TheSlayer9X 4 years ago +2

    Any chance we can get access to those projects?

  • @jkjiang2301
    @jkjiang2301 2 years ago

    Just wondering, does changing the distance metric help avoid the curse of dimensionality?

    • @kilianweinberger698
      @kilianweinberger698  2 years ago +1

      Sometimes, if the (pseudo-)distance projects out irrelevant dimensions. E.g. you could imagine you have 1000 dimensions, but only 3 are relevant to your classification problem. If your distance operates in those three dimensions only, you have avoided the curse.
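
      For example, a (hypothetical) distance that only looks at a known subset of relevant dimensions could be sketched like this:

      import numpy as np

      def masked_distance(x, y, relevant_dims):
          # ignore every coordinate except the ones believed to matter
          diff = x[relevant_dims] - y[relevant_dims]
          return np.sqrt(np.sum(diff ** 2))

      x, y = np.random.randn(1000), np.random.randn(1000)
      print(masked_distance(x, y, relevant_dims=[3, 17, 42]))  # effectively a 3-d distance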

  • @dhirajbagul1518
    @dhirajbagul1518 4 years ago

    Hello Sir! Thank you for the lecture. I have a question regarding kNN: what if we used a different distance metric for different features, in order to keep one feature with huge values from dominating the overall distance? And would it also help to normalize all the features before computing distances?

    • @kilianweinberger698
      @kilianweinberger698  4 years ago +2

      Typically, what people do is to re-scale all features to be similar. E.g. such that they are all within [0,1] or to normalize them to have mean 0 and standard deviation of 1.
      The next step would be to learn a metric ( see for example en.wikipedia.org/wiki/Large_margin_nearest_neighbor )
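
      Both variants mentioned here take only a few lines in numpy (a sketch; column-wise statistics, with a small constant to avoid division by zero):

      import numpy as np

      def minmax_scale(X):
          # rescale every feature (column) into [0, 1]
          mn, mx = X.min(axis=0), X.max(axis=0)
          return (X - mn) / (mx - mn + 1e-12)

      def standardize(X):
          # zero mean, unit standard deviation per feature
          return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)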

    • @dhirajbagul1518
      @dhirajbagul1518 4 years ago

      @@kilianweinberger698 Okay (I thought about that later). I understood the example from the wiki; thank you so much sir!! And also thank you for the (valuable) notes.

  • @abhinavmishra9401
    @abhinavmishra9401 3 years ago

    did anyone find the Maths background lecture?

  • @gregmakov2680
    @gregmakov2680 2 years ago

    hahhah, dan dat roi cu oi :D:D:D k-NN still works well for high dimensional space :D it depends on how the distance is defined.

  • @raghavgaur8901
    @raghavgaur8901 5 years ago +1

    What is the maximum number of dimensions for which kNN works well?

    • @kilianweinberger698
      @kilianweinberger698  5 years ago +3

      Unfortunately it really depends on your data and how densely sampled it is. Also, what really counts is the intrinsic dimension, which is not that easy to characterize. So it can still work well with thousands of dimensions, as long as your data is intrinsically low dimensional.

    • @raghavgaur8901
      @raghavgaur8901 5 years ago

      @@kilianweinberger698 Great answer! Also, can you tell me whether there is any way to estimate the number of intrinsic dimensions in our data, so that it becomes easier to decide whether to apply kNN directly or to do PCA first and then apply kNN?

  • @sumanchaudhary8757
    @sumanchaudhary8757 4 years ago +1

    one kilian data ...naaaaaaice =D=D=D

  • @quirtt
    @quirtt 1 year ago

    You are funny!

  • @abunapha
    @abunapha 5 years ago +2

    Starts at 1:05

    • @josephs.7960
      @josephs.7960 4 years ago

      You couldn't wait one minute?

  • @hdang1997
    @hdang1997 4 years ago

    Anyone else facing problems with the linear algebra and vectors and all?

  • @chillmode9576
    @chillmode9576 5 years ago +1

    great lecture i'm just having a hard time

  • @rodas4yt137
    @rodas4yt137 4 years ago

    Can anyone tell me what year this course is held at?

    • @kilianweinberger698
      @kilianweinberger698  4 years ago +1

      The videos are from Spring 2017. However you can look at the Spring 2018 class notes, as I fixed a few small bugs.

    • @rodas4yt137
      @rodas4yt137 4 years ago

      @@kilianweinberger698 Sorry, I probably misspoke! I meant: are your students in their second/third/fourth year?

  • @automatescellulaires8543
    @automatescellulaires8543 2 years ago

    Man, this guy is loud.

  • @alexandrestehlick4929
    @alexandrestehlick4929 5 years ago +1

    The teacher opens the floor to questions too often, so the students keep asking these empty questions to mark their existence. Let's move on.