Gauss 2.0

  • Published: 23 Oct 2024

Comments • 92

  • @francaisdeuxbaguetteiii7316
    @francaisdeuxbaguetteiii7316 3 года назад +19

    Dr Peyam spoils us too much with this amazing content

  • @algorithminc.8850
    @algorithminc.8850 3 года назад +19

    Quite useful ... for performance computing and hardware development, always looking at how to keep integers until I have to use floating point ... Thank you ...

    • @AaronRotenberg
      @AaronRotenberg 3 года назад +1

      Before you start trying to reinvent how to do high-performance Gaussian elimination with integers only, I recommend doing a fair amount of research... this is a _very_ well-studied topic. And it's easy to design pathological matrices that exponentially blow up the precision you have to store unless you use a clever algorithm - see rjlipton.wordpress.com/2015/01/14/forgetting-results/

  • @gamedepths4792
    @gamedepths4792 3 года назад +14

    This is way too good to be true!
    This both saves time AND reduces mistakes, which is essential for competitive exams!

  • @mayankshukla1274
    @mayankshukla1274 3 года назад +4

    It is a very good way to find the solutions of linear equations without getting fractions. Thank you, sir.

  • @MrCigarro50
    @MrCigarro50 3 года назад +2

    Just amazing. I did not know this technique, but you have made my teaching a lot more enjoyable. Those fractions were driving me and my students crazy.

  • @LunaPaviseSolcryst
    @LunaPaviseSolcryst 3 года назад +7

    I typically like to do an extra step where I find the LCM of the two numbers I'm trying to get rid of (so 6 for 2 and 3), then in the 2nd step I eliminate them by subtracting rows. It's more steps, but at each step I'm less likely to make a mistake, since the additions between rows will always be by a factor of 1 or -1, and the multiplications ought to always make the two numbers the same, so I can check whether the other entries were multiplied correctly. The trick is to break the problem down into steps that your brain has optimized, like -6 + 6 = 0.

    • @SlipperyTeeth
      @SlipperyTeeth 3 года назад +1

      I believe that this is effectively the same method. It just hides the multiply step and doesn't change the pivot row. Actually, yours uses the LCM, whereas this one just straight multiplies.
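
    A minimal sketch of the LCM variant described in this thread (Python; the function name and example rows are mine, assuming integer coefficients). Both rows are scaled up to the LCM of their leading entries and then subtracted; when the leading entries are coprime, the LCM is simply their product, so the result matches the straight-multiplication method exactly.

        from math import gcd

        def lcm_eliminate(pivot_row, update_row):
            """Scale both rows so their leading entries equal their LCM, then subtract,
            so the leading entry of update_row becomes 0 using only integers."""
            p, u = pivot_row[0], update_row[0]
            m = abs(p * u) // gcd(p, u)      # LCM of the two leading entries
            fp, fu = m // p, m // u          # factors that bring each row up to the LCM
            return [fu * b - fp * a for a, b in zip(pivot_row, update_row)]

        # Leading entries 2 and 3, so both rows are scaled to 6 before subtracting.
        print(lcm_eliminate([2, 1, 4, 5], [3, -1, 2, 2]))   # [0, -5, -8, -11]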

  • @cyto3338
    @cyto3338 3 года назад +1

    Gauss would really be proud, thank you for this amazing method !

  • @6754bettkitty
    @6754bettkitty 3 года назад +2

    When I first saw this, I was reminded of Cramer's Rule. I'll call it a hybrid between Gaussian Elimination and Cramer's Rule.

    • @GRBtutorials
      @GRBtutorials 3 года назад

      LOL, this was my first thought too. The proof is probably similar.

  • @bprpfast
    @bprpfast 3 года назад +2

    Yoooo now we need Dr K to be featured on this channel!!

    • @drpeyam
      @drpeyam  3 года назад +1

      We should!!!

  • @MuPrimeMath
    @MuPrimeMath 3 года назад +5

    Wow, this is so cool!

  • @bigcheese6855
    @bigcheese6855 3 года назад

    I'm about to take Linear Algebra starting this Monday. I have no clue what all of this is just yet but I'm saving this so I can reference it later on this quarter. Thank you, and great work!

  • @Justiin_rm
    @Justiin_rm 3 года назад +1

    that is beautiful. i wish i knew this when in mathematics class. thank you Dr Peyam.

  • @edgardojaviercanu4740
    @edgardojaviercanu4740 3 года назад

    I am astounded. A beautiful method.

  • @sharpnova2
    @sharpnova2 3 года назад

    just like you, i love the forward aspect but have no intention of that backwards component!
    i intend to implement this in code for fun. very neat trick. i think i understand why it works too. glad my linear algebra still feels fresh
    thank you for the great content and enthusiastic math, you're awesome

  • @stephsteyn4638
    @stephsteyn4638 3 года назад

    This is an excellent method! I wish I could use it in my linear algebra course but my lecturer requires that at each step I indicate which Elementary Row Operation was performed.

  • @theproofessayist8441
    @theproofessayist8441 3 года назад +2

    Now we need a proof presentation that this algorithm works!!!! :) - YAY hate making arithmetic mistakes with Gaussian Elimination!

    • @drpeyam
      @drpeyam  3 года назад +2

      The proof is easy, try it out

  • @dr.rahulgupta7573
    @dr.rahulgupta7573 3 года назад

    Simple and clear presentation. Excellent! DrRahul, Rohtak, Haryana, India

  • @ahmedmghabat7982
    @ahmedmghabat7982 3 года назад +1

    Just after two days of following, you deserve a subscription ❤

  • @tambuwalmathsclass
    @tambuwalmathsclass 3 года назад

    Dr. You are doing a great job.
    What name should we call this method?

    • @drpeyam
      @drpeyam  3 года назад

      Koster’s Method haha

    • @tambuwalmathsclass
      @tambuwalmathsclass 3 года назад

      @@drpeyam 😁 Named after his name😁 thank you Dr.

  • @shaycorvo4290
    @shaycorvo4290 3 года назад

    Wow thank you sir for this amazing technique....

  • @SlipperyTeeth
    @SlipperyTeeth 3 года назад +3

    The use of the determinant here seems like it might just be a coincidental shorthand. It will haunt me unless someone finds a generalization that uses determinants of higher-order matrices. But I don't even have the slightest idea of what it would have to generalize - Gaussian elimination itself?

    • @AaronRotenberg
      @AaronRotenberg 3 года назад

      I think the algorithm in the video is just doing some shenanigans with Cramer's rule or matrix minors. Gaussian elimination will be exponentially faster than that for larger matrices.

    • @SlipperyTeeth
      @SlipperyTeeth 3 года назад +1

      @@AaronRotenberg
      I don't think you understand the method. Each entry only requires the determinant of a 2×2 matrix regardless of the size of the original matrix.
      Here is an explanation of the method I gave elsewhere:
      So you have a pivot row and a row you want to "update"; first, simultaneously multiply the pivot row by the first number in the "update" row and the "update" row by the first number in the pivot row; then, do the usual Gaussian Elimination on those two rows.
      What you've done is the simplest way to ensure that you get only integers in the rows, because now the number you want to turn into a 0 in the "update" row is the same in absolute value as the first number in the pivot row, so Gaussian Elimination amounts to adding/subtracting integers.
      At the same time, if you look at what you've done term by term in the "update" row, it's exactly the determinant described in the video, because you are just multiplying the respective terms by the first numbers in the other row before subtracting.
      I believe it is equivalent to standard Gaussian elimination in terms of computation time. If there is a connection to Cramer's rule or matrix minors, I would appreciate an explanation.
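
    A minimal sketch of the per-entry 2×2-determinant update described in this reply (Python on the augmented matrix; the function name and example system are mine, and there is no pivoting or row swapping). Every updated entry is a 2×2 determinant built from the pivot column and the column being updated, so all intermediate values stay integers, though they can grow; the Bareiss algorithm adds an exact division by the previous pivot to control that growth.

        def det2_eliminate(M):
            """Fraction-free forward elimination: each updated entry is the 2x2
            determinant | M[k][k]  M[k][j] ; M[i][k]  M[i][j] |."""
            M = [row[:] for row in M]
            n = len(M)
            for k in range(n):                    # pivot row k (assumes M[k][k] != 0)
                for i in range(k + 1, n):         # rows below the pivot
                    M[i] = [M[k][k] * M[i][j] - M[i][k] * M[k][j]
                            for j in range(len(M[i]))]
            return M

        A = [[2, 1, 4, 5],
             [3, -1, 2, 2],
             [1, 2, 1, 3]]
        print(det2_eliminate(A))   # [[2, 1, 4, 5], [0, -5, -8, -11], [0, 0, 34, 28]]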

  • @bprpfast
    @bprpfast 3 года назад

    Wow!!

  • @jaikumar848
    @jaikumar848 3 года назад +3

    Hello Dr Peyam! Any trick or shortcut to find the inverse of a matrix?

  • @paperpen5766
    @paperpen5766 3 года назад

    Impressive!

  • @rezamiau
    @rezamiau 3 года назад

    Absolutely incredible!!
    Does this method work with 4×4 systems as well?

    • @drpeyam
      @drpeyam  3 года назад

      Yes it does!

  • @1willFALL
    @1willFALL 3 года назад +3

    Really clever technique! Does this work for all three-by-three matrices? What about the inconsistent case or infinite solutions; how would this work?

    • @nournote
      @nournote 3 года назад +2

      I guess you would find at some point a line with all zeros : 0 0 0 | 0
      And in the case of 0 solutions, something contradictory like : 0 0 0 | a (a≠0)

    • @drpeyam
      @drpeyam  3 года назад +1

      It works for any kind of matrices
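
    A small illustration of the replies above (Python; the example systems are mine). After the same determinant-style update, a dependent system leaves a row of all zeros, while a contradictory one leaves 0 0 0 | a with a ≠ 0.

        def step(pivot, row):
            """One determinant-style update: pivot[0]*row - row[0]*pivot."""
            return [pivot[0] * b - row[0] * a for a, b in zip(pivot, row)]

        # Infinitely many solutions: the second equation is twice the first.
        print(step([1, 2, 3, 4], [2, 4, 6, 8]))    # [0, 0, 0, 0]

        # No solution: same left-hand side, contradictory right-hand side.
        print(step([1, 2, 3, 4], [2, 4, 6, 9]))    # [0, 0, 0, 1]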

  • @cepatwaras
    @cepatwaras 3 года назад +1

    this is an awesome technique. I wish I knew it back in high school.

  • @carterwoodson8818
    @carterwoodson8818 3 года назад +2

    Does this method have a name? I feel like I've heard this referenced as Montante's method?

  • @AlfonsoNeilJimenezCasallas
    @AlfonsoNeilJimenezCasallas 3 года назад

    Determinants are an interesting toolkit for solving linear algebra problems 😁

  • @phat5340
    @phat5340 3 года назад +3

    Will you ever explain why this works plz

    • @LaerteBarbalho
      @LaerteBarbalho 3 года назад +5

      It's just the Gaussian Elimination with a factor to eliminate the fractions.
      Let's suppose you have a 2 x 2 system:
      a11x + a12y = b1
      a21x + a22y = b2
      To get a 0 in place of a21, by normal Gaussian Elimination, you multiply row 2 by -(a11/a21) and sum it with row 1:
      (-a11/a21*a21 + a11)x + (-a11/a21*a22+a12)y = (-a11/a21*b2 + b1),
      You can rewrite that as:
      0 x + (-1/a21)*(a11*a22-a12*a21)y = (-1/a21)*(a11*b2 - b1*a21)
      and you just multiply by (-a21):
      0 x + (a11*a22-a12*a21)y = (a11*b2 - b1*a21), and you can see the determinants right there.
      I'm no mathematician, but I hope I could help you.
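
    The 2×2 derivation above can also be checked symbolically; here is a quick sketch with sympy (assuming it is installed), confirming that the determinant row is just the ordinary Gaussian-elimination row scaled by the nonzero factor a11.

        from sympy import symbols, simplify

        a11, a12, a21, a22, b1, b2 = symbols('a11 a12 a21 a22 b1 b2')

        # Ordinary Gaussian elimination on row 2: R2 - (a21/a11)*R1
        gauss_y = a22 - (a21 / a11) * a12
        gauss_b = b2 - (a21 / a11) * b1

        # Determinant form of the same row: a11*R2 - a21*R1
        det_y = a11 * a22 - a21 * a12
        det_b = a11 * b2 - a21 * b1

        print(simplify(det_y - a11 * gauss_y))   # 0
        print(simplify(det_b - a11 * gauss_b))   # 0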

  • @sarkarsubhadipofficial
    @sarkarsubhadipofficial 3 года назад

    Great sir❤️
    Love from India

  • @yilmazkaraman256
    @yilmazkaraman256 3 года назад

    Nice one

  • @JimmyCerra
    @JimmyCerra 3 года назад

    This is very interesting! I am taking Linear Algebra next term. Thank you! I have a question, professor. Does this work well with larger matrices? Or only systems of 3 linear equations of 3 variables?

    • @drpeyam
      @drpeyam  3 года назад +2

      It works for any system

  • @alvarezjulio3800
    @alvarezjulio3800 3 года назад

    Oh Master Gauss, this looks awesome!

  • @AriosJentu
    @AriosJentu 3 года назад +5

    Interesting technique, but how about a proof that this works? I think it may not be too hard to do, because it uses basic row-summing with common multipliers for rows, etc. It's interesting how they relate to the determinant. Thank you.

    • @drpeyam
      @drpeyam  3 года назад +6

      No the proof is easy, try it out with a matrix
      a b c
      d e f
      And first row reduce, and then try this trick, and you’ll see it’s the same

    • @nathanisbored
      @nathanisbored 3 года назад

      @@drpeyam does it have to be a square matrix (w/ augmented column)?

    • @drpeyam
      @drpeyam  3 года назад

      @nathanisbored No I don’t think so, it works for any matrix

  • @carlosgiovanardi8197
    @carlosgiovanardi8197 3 года назад

    awesome!!

  • @sanjayk9624
    @sanjayk9624 3 года назад

    I like this method

  • @我妻由乃-v5q
    @我妻由乃-v5q 3 года назад

    Great!

  • @mimithehotdog7836
    @mimithehotdog7836 3 года назад +2

    4:22 Magic!

  • @tzonic8655
    @tzonic8655 3 года назад +1

    I wonder if i can use this in my linear algebra finals

  • @makavelix7767
    @makavelix7767 3 года назад

    I like your line which i believe 😊😊

  • @rikhalder5708
    @rikhalder5708 3 года назад +6

    Smart handsome Guess 😂

  • @abhilashsaha4590
    @abhilashsaha4590 3 года назад +1

    Amazing! But how does this method work?

    • @JoachimFavre
      @JoachimFavre 3 года назад +2

      It is just regular Gaussian elimination, but doing only multiplications. For example:
      (2 5 | 3)          (6 15 | 9)
      (3 2 | 7) becomes  (6  4 | 14)
      We multiplied the first row by 3 and the second one by 2. This way, you can just subtract them. Dr. Peyam's method is exactly the same, just doing the two steps at once, with a determinant. I don't know if I'm being very clear, do not hesitate if you have any questions x)

    • @abhilashsaha4590
      @abhilashsaha4590 3 года назад

      @@JoachimFavre Thank you I understand now.
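
    A quick check of the 2×2 example from this thread in Python (exact arithmetic via fractions.Fraction; the variable names are mine). It confirms that scaling-and-subtracting matches the determinant shortcut, and that the solution itself is fractional even though every elimination step stays in integers.

        from fractions import Fraction

        # System from the comment: 2x + 5y = 3, 3x + 2y = 7
        r1 = [2, 5, 3]
        r2 = [3, 2, 7]

        # Scale r1 by 3 and r2 by 2 so the x-coefficients match, then subtract.
        diff = [2 * b - 3 * a for a, b in zip(r1, r2)]          # [0, -11, 5]

        # Same entries as the determinant shortcut det([[r1[0], r1[j]], [r2[0], r2[j]]]).
        assert diff[1] == r1[0] * r2[1] - r2[0] * r1[1]
        assert diff[2] == r1[0] * r2[2] - r2[0] * r1[2]

        # Back-substitute with exact fractions.
        y = Fraction(diff[2], diff[1])                # -5/11
        x = (Fraction(r1[2]) - r1[1] * y) / r1[0]     # 29/11
        print(x, y)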

  • @OssianDrums
    @OssianDrums 3 года назад

    Lol you're a genius. I needed it, thanks!

  • @JSSTyger
    @JSSTyger 3 года назад

    For me, calculating determinants is a tedious process because of having to remember minus signs.

  • @thanhliemtu8071
    @thanhliemtu8071 3 года назад

    Can we also use this method to find the inverse of a matrix?

    • @thanhliemtu8071
      @thanhliemtu8071 3 года назад

      Ok I came back here from the future and the answer is definitely yes

  • @dragonsdream4236
    @dragonsdream4236 Год назад

    This method is incredible but I am unsure as to how it works

  • @MrCigarro50
    @MrCigarro50 3 года назад

    Does this work for square matrices of higher dimensions?

    • @drpeyam
      @drpeyam  3 года назад

      Yes, and in fact for any matrix

  • @User-gt1lu
    @User-gt1lu 3 года назад +2

    69k not bad.

  • @apoorvvyas52
    @apoorvvyas52 3 года назад

    Why does this method work?

  • @ethancheung1676
    @ethancheung1676 3 года назад

    Cool technique but How does it work?

    • @drpeyam
      @drpeyam  3 года назад

      Try it out for a general 2x3 matrix and you see the pattern :)

    • @SlipperyTeeth
      @SlipperyTeeth 3 года назад +1

      So you have a pivot row and a row you want to "update"; first, simultaneously multiply the pivot row by the first number in the "update" row and the "update" row by the first number in the pivot row; then, do the usual Gaussian Elimination on those two rows.
      What you've done is the simplest way to ensure that you get only integers in the rows, because now the number you want to turn into a 0 in the "update" row is the same in absolute value as the first number in the pivot row, so Gaussian Elimination amounts to adding/subtracting integers.
      At the same time, if you look at what you've done term by term in the "update" row, it's exactly the determinant described in the video, because you are just multiplying the respective terms by the first numbers in the other row before subtracting.

  • @lenamaral6055
    @lenamaral6055 3 года назад

    Great 'trick'!

  • @jeemain9071
    @jeemain9071 3 года назад

    Thumbnail 😎😎😎😎😎

  • @Jim-be8sj
    @Jim-be8sj 3 года назад +1

    Better: A\b

    • @LaerteBarbalho
      @LaerteBarbalho 3 года назад

      I've found the engineer...

    • @Jim-be8sj
      @Jim-be8sj 3 года назад +1

      @@LaerteBarbalho Close. I am in the applied math trade, but I teach engineers linear algebra and numerical analysis. It goes like this: A) Here's Gauss-Jordan, by hand. B) Here's LU factorization with pivoting, by computer. C) Forget what you know and use Matlab backslash. :)
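
    For reference, a numpy equivalent of Matlab's A\b for a square system (assuming numpy is available; the example numbers are mine). Note that, unlike the integer-only elimination discussed above, this works in floating point.

        import numpy as np

        A = np.array([[2.0, 1.0, 4.0],
                      [3.0, -1.0, 2.0],
                      [1.0, 2.0, 1.0]])
        b = np.array([5.0, 2.0, 3.0])

        # Solves Ax = b via LAPACK's LU factorization with partial pivoting,
        # much like backslash does for general square matrices.
        x = np.linalg.solve(A, b)
        print(x)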

  • @jamest8684
    @jamest8684 3 года назад +1

    Great, but I fail to see WHY this works.

    • @vigneshshaik3988
      @vigneshshaik3988 3 года назад

      He just applied row operations; you can see it clearly in those determinants.

    • @jackiekwan
      @jackiekwan 3 года назад +3

      2 1 4 |5
      3 -1 2 |2
      Taking the determinants of the first 2 rows
      is the same as multiplying the 1st row by the 1st element of the 2nd row (i.e. ×3) and the 2nd row by the 1st element of the 1st row (i.e. ×2),
      and then doing the subtraction to eliminate the 1st element of the 2nd row.
      The rest is done with the same idea.

    • @chriswinchell1570
      @chriswinchell1570 3 года назад +1

      @@jackiekwan Well explained. It’s essentially a fast way of scaling the rows up to a common multiple.
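
    A small check of the two replies above in Python (the helper name is mine): the scale-and-subtract step and the entry-by-entry 2×2 determinants give the same updated row, up to the sign convention of which row goes on top of the determinant.

        def det2(a, b, c, d):
            """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
            return a * d - b * c

        r1 = [2, 1, 4, 5]    # 2  1  4 | 5
        r2 = [3, -1, 2, 2]   # 3 -1  2 | 2

        # Scale-and-subtract, as described above: (2 * r2) - (3 * r1).
        scaled = [2 * b - 3 * a for a, b in zip(r1, r2)]

        # Entry-by-entry determinants: first column paired with each later column.
        dets = [det2(r1[0], r1[j], r2[0], r2[j]) for j in range(1, len(r1))]

        print(scaled)   # [0, -5, -8, -11]
        print(dets)     # [-5, -8, -11]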

  • @umerfarooq4831
    @umerfarooq4831 3 года назад

    Dr πM
    Math made fun and easy

  • @رضاشریعت
    @رضاشریعت 3 года назад +1

    I will keep using Cramer's rule even after watching this video anyway 😂

    • @lenamaral6055
      @lenamaral6055 3 года назад +1

      Check out the method of triangles.😉

  • @berzerksharma
    @berzerksharma 3 года назад +1

    I use 3D geometry to solve linear equations in 3 variables; will society accept me?

  • @rjbeatz
    @rjbeatz 3 года назад +1

    Hello

  • @jayeshyedge7171
    @jayeshyedge7171 3 года назад

    The normal method is sufficient for this question.

    • @SlipperyTeeth
      @SlipperyTeeth 3 года назад

      Are you sure about that? Computationally, integers and floating point numbers have different blind spots where they have to round. If given a problem in integers, converting to floating point might lead to unnecessary rounding errors.