He's Beginning To Believe!
Shut the... smh
One of Us, One of Us, Gooble Gobble..
The greatest thing about this channel is that it is heavy on examples. This is the way I learn. I need things spelled out for me. Thank you!
Penn has become indispensable. I don't understand everything, but he explores it all in such a methodical way that I always learn a little bit along the way
That last matrix looks like a rotation matrix, or more accurately a spiral. This honestly makes complete sense given the use of exponentials and trig functions. You can also write it as the complex number a ± bi, which is the exact same rotation. (Though whether you use plus or minus depends on which side you're looking at the plane from.)
Honestly that's why I love linear algebra so much. So versatile
17:22
Was confused at 7:45 because you seem to do XA - AX rather than AX - XA. But I get that you changed what A is to make A the input of T.
I think he reversed A and X in the commutation calculation?
Classic Penn
So he computed -T instead of T
This was a great video.
Thank you, professor.
the last example was NEAT
Have you ever had a dream...
Everything is IN the matrix. Matrix can become Everything.
That last application blew me away. Is that a much easier way to get that integral? I think it would take a long time by heuristic trial-and-error methods.
The integral can be computed using complex numbers: instead of integrating e^(ax)*cos(bx) (resp. e^(ax)*sin(bx)), you integrate e^((a+ib)x) and take the real component (resp. imaginary component) to get the result.
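Concretely, a quick sketch (for real a and b, not both zero):

```latex
\int e^{(a+ib)x}\,dx
  = \frac{e^{(a+ib)x}}{a+ib} + C
  = \frac{a-ib}{a^2+b^2}\,e^{ax}\bigl(\cos bx + i\sin bx\bigr) + C
```

and taking the real and imaginary parts gives

```latex
\int e^{ax}\cos bx\,dx = \frac{e^{ax}(a\cos bx + b\sin bx)}{a^2+b^2} + C,
\qquad
\int e^{ax}\sin bx\,dx = \frac{e^{ax}(a\sin bx - b\cos bx)}{a^2+b^2} + C.
```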
@@alexisren365 Thanks for telling me this, it's very interesting. Is this method an example of Feynman integration? I think not.
The "normal" way would be to use integration by parts (I think two times?). The integral you're trying to solve will come up again and you can easily solve for it by manipulating the equation.
@@jorex6816 Yes! I've seen that done before in the dim distant past. Thanks. 👍
Looking at the inverse of
[ [a, b];
[c, d] ],
which is
1/det *
[ [d, -b];
[-c, a] ],
And next to it the 4x4 matrix for our commutator -T with the matrix A =
[ [2, 3];
[-1, 0] ]
becomes
[ [0, -1, -3, 0];
[3, -2, 0, -3];
[1, 0, 2, -1];
[0, 1, 3, 0] ]
This looks like some kind of Kronecker product: the top right is 3 ⊗ [ [-1, 0]; [0, -1] ] and the bottom left is (-1) ⊗ [ [-1, 0]; [0, -1] ]. So, as in the inverse case, they get multiplied by -1, but blown up from 1 to 2 dimensions.
And instead of swapping places with each other, the other two entries sort of swap places with the whole matrix A:
Top left is (if we ignore the 1/det factor) -(A^(-1))^T
Bottom right then is just A^T
Maybe this coincidence is just because of the missing minus sign, so if we multiply the 4x4 matrix by -1
[ [0, 1, 3, 0];
[-3, 2, 0, 3];
[-1, 0, -2, 1];
[0, -1, -3, 0] ]
and then the top right and bottom left are just the top right and bottom left of matrix A multiplied by the 2x2 identity matrix.
And top left becomes (A^(-1))^T
And bottom right becomes -A^T.
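In fact it is exactly a Kronecker product: with the basis ordered E11, E12, E21, E22 (row-major vectorization), the standard identities vec(AX) = (A ⊗ I) vec(X) and vec(XA) = (I ⊗ Aᵀ) vec(X) give the matrix of X ↦ AX − XA as A ⊗ I − I ⊗ Aᵀ. A minimal numpy sketch to check (assuming that basis ordering):

```python
import numpy as np

A = np.array([[2, 3], [-1, 0]])
I = np.eye(2, dtype=int)

# Basis ordered E11, E12, E21, E22 (row-major vectorization):
# vec(AX) = (A kron I) vec(X), vec(XA) = (I kron A^T) vec(X),
# so the matrix of T(X) = AX - XA is:
T = np.kron(A, I) - np.kron(I, A.T)
print(T)   # [[ 0  1  3  0], [-3  2  0  3], [-1  0 -2  1], [ 0 -1 -3  0]]
print(-T)  # the 4x4 from the video, which effectively computed XA - AX
```

One caveat on the (A⁻¹)ᵀ observation: by Cayley-Hamilton, det(A)·A⁻¹ = tr(A)·I − A for 2x2 matrices, while the top-left block of A ⊗ I − I ⊗ Aᵀ is a11·I − Aᵀ. These agree only because a22 = 0 for this particular A, so that part of the pattern is special to this example.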
“you think that’s calculus you’re doing now?”
Nobody:
Literally nobody:
Linear algebra teachers be like: everything's a matrix
You can even use matrices for mappings between infinite dimensional vector spaces. Heisenberg did it!
Well, what do you mean by that? Are there linear maps between infinite dimensional spaces? Yes. Can they be represented by a matrix? Trivially, no. Linear operator =/= matrix
Kind of. Yes, you can represent any linear operator in quantum mechanics using the solutions of the Schrödinger equation and get a matrix representation, and in principle that matrix could be infinite dimensional, but there are some technical subtleties, so I recommend investigating quantum mechanics and functional analysis textbooks to get a proper understanding.
"Jesse... Jesse, listen to me! We have to transform!"
"Yo, mister White... I don't think we can transform between infinite dimensional vector spaces."
"Jesse... What the fuck are you talking about Jesse?"
@@juliandominikreusch8979
Yes, I mean linear mappings, which can be expressed as generalized matrices. Heisenberg formulated quantum mechanics in the form of "matrix mechanics", without actually realizing it was equivalent to Schrödinger's "wave mechanics".
The discrete nature of quantum physics allows one to express physical states, e.g. energy levels, as infinite vectors, and physical operators - which correspond to classical quantities like momentum, energy or position - as matrices acting upon these vectors, and giving us other vectors. The analogy is powerful, and can be made rigorous, because even though you can't actually write down these infinite matrices, their elements are well-defined, just as the terms of an infinite series are well defined.
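A concrete illustration, as a minimal numpy sketch (my own setup, assuming natural units ħ = m = ω = 1 and truncating the harmonic oscillator basis to N states): the matrix elements of position and momentum are perfectly well-defined even though the full matrices are infinite.

```python
import numpy as np

N = 200  # truncation; the "true" matrices are infinite
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # lowering operator: a|n> = sqrt(n)|n-1>
x = (a + a.T) / np.sqrt(2)                  # position operator
p = (a - a.T) / (1j * np.sqrt(2))           # momentum operator

comm = x @ p - p @ x
# [x, p] = i*I holds exactly for the infinite matrices; the truncated
# version fails only in the last row/column, an artifact of the cutoff.
print(np.allclose(comm[:N-1, :N-1], 1j * np.eye(N-1)))  # True
```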
@@Nikolas_Davis but don't forget that not all operators have discrete spectra. For example, the position and momentum operators, or the kinetic energy in the case of free particles. In that case there are some subtleties (and yes, there is the Dirac delta, but remember, it is a distribution, which makes this problem a bit tricky).
Great video
I always enjoy his videos, and he is especially excellent at linking Linear Algebra to other fields like Calculus. I never saw that in French math, which treats Linear Algebra abstractly but never applies it in other areas like Calculus.
What is French math? Is that a channel?
Could you do a video on the limit lim n-->∞ BB(n)/TREE(n) where BB(n) is the busy beaver function and TREE(n) is the tree function? Also a video on uncomputable numbers would be cool as well.
6:06 when I watched this video through the first time I thought that left multiplication was wrong here, but if you consider integration to be the inverse of differentiation then you should right multiply here!
I remember doing this to calculate the half derivative of cos(x)
The mapping (and its inverse) between an element of the finite-dimensional vector space and the "vector" used to encode it can be tricky to define. One positive thing about matrices is that there are a lot of "tools", sharpened by years of usage, that can be used to easily establish the properties of the linear transformation itself.
Is there an obvious (or not-so-obvious) relationship between the 2x2 matrix A and the 4x4 matrix representing the commutator operation? My linear algebra knowledge isn't very extensive, but it seems like there must be something....
Perhaps it even is a linear relationship, given by a 16x4 matrix (sending the 4 entries of the 2x2 to the 16 entries of the 4x4)
if you've got time and patience then you could carry out the same calculation Michael did but for a general matrix instead of {{2, 3}, {-1, 0}}, and you can discover the answer for yourself (I don't know the answer but I have neither time nor patience xD). I can certainly see some patterns in the 4x4 matrix Michael gets though
Nah. Technically everything is a tensor.
Great video but audio was a bit low.
We always benefit. Thank you very much.
I totally agree.
I agree.
I couldn't agree more with you either.
Peace be upon you. Could you recommend some books for the first year of university?
Around 8:00 I think you've accidentally swapped A and X
In the commutator example, he takes the rows of the 4x4 matrix and multiplies each row by the corresponding row entry of the vector in front. Doesn’t matrix multiplication require the COLUMNS of the second matrix to be used as the ‘multiplier’?
No, he multiplies each row of the 4x4 matrix with the whole column vector. The matrix isn't 'on the right' in the sense of matrix multiplication, it's a diagram showing an operator - the 4x4 matrix is placed where it is because it's the operator represented by the arrow below it. I believe this diagram is called a commutative diagram. The matrix operator itself is still a left multiplier.
Hi Dr.!
Interesting , nice job 🙂👍
Hi,
These constant glances to the right are not very pleasant; it looks like Numberphile 🤣
Hi Michael,
Please suggest a book on linear algebra which has this kind of theory and examples. Thanks
You look like a young John McEnroe.
The video is missing the most central idea: that the matrix can be constructed by examining what the linear transformation does to the basis elements
How would that work with the linear map from M(2x2) to M(2x2)?
What? He literally does exactly that in the final example!
@@jorex6816 The basis can be the 4 matrices with a 1 in a slot and 0 in others, no?
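Yes, and carrying that out is mechanical. A minimal numpy sketch (assuming that standard basis E11, E12, E21, E22 and the A = {{2, 3}, {-1, 0}} from the video):

```python
import numpy as np

A = np.array([[2, 3], [-1, 0]])

def T(X):
    # the commutator map X -> AX - XA
    return A @ X - X @ A

# Standard basis of M(2x2): E11, E12, E21, E22 (a 1 in one slot, 0 elsewhere)
basis = []
for k in range(4):
    E = np.zeros((2, 2), dtype=int)
    E[divmod(k, 2)] = 1
    basis.append(E)

# Column j of the matrix of T is the coordinate vector of T(E_j)
M = np.column_stack([T(E).flatten() for E in basis])
print(M)  # matches the video's 4x4 up to the overall sign
          # (AX - XA vs XA - AX) discussed in another thread
```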
the answer at 6:39 makes no sense to me: a 3x1 vector times a 1x3 vector equals a single value????
Yeah, it should be the other way around. A 1x3 vector times a 3x1 vector gives a 1x1 single value.
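A two-line numpy check of the shapes (example vectors made up):

```python
import numpy as np

col = np.array([[1], [2], [3]])  # 3x1 column vector
row = np.array([[4, 5, 6]])      # 1x3 row vector

print((row @ col).shape)  # (1, 1): inner product, a single value
print((col @ row).shape)  # (3, 3): outer product, a full matrix
```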
That does not apply to octonion multiplication, which is non-associative.
Matrix multiplication is always associative, whether or not it is commutative.
If it’s non-associative, it’s not linear
@@bruh-pj3kq I'm not sure what you mean by linear in this context.
[Leibniz's contingency argument, clarified]:
Ten whole, rational numbers 0-9 and their geometric counterparts 0D-9D.
0 and its geometric counterpart 0D are:
1) whole
2) rational
3) not-natural (not-physical)
4) necessary
1-9 and their geometric counterparts 1D-9D are:
1) whole
2) rational
3) natural (physical)
4) contingent
Newton says since 0 and 0D are
"not-natural" ✅
then they are also
"not-necessary" 🚫.
Newton also says since 1-9 and 1D-9D are "natural" ✅
then they are also
"necessary" 🚫.
This is called "conflating" and is repeated throughout Newton's Calculus/Physics/Geometry/Logic.
con·flate
verb
combine (two or more texts, ideas, etc.) into one.
Leibniz does not make these fundamental mistakes.
Leibniz's "Monadology" 📚 is zero and it's geometric counterpart zero-dimensional space.
0D Monad (SNF)
1D Line (WNF)
2D Plane (EMF)
3D Volume (GF)
We should all be learning Leibniz's Calculus/Physics/Geometry/Logic.
Fibonacci sequence starts with 0 for a reason. The Fibonacci triangle is 0, 1, 2 (Not 1, 2, 3).
Newton's 1D-4D "natural ✅ =
necessary 🚫" universe is a contradiction.
Natural does not mean necessary. Similar, yet different.
Not-natural just means no spatial extension; zero size; exact location only. Necessary.
Newtonian nonsense will never provide a Theory of Everything.
Leibniz's Law of Sufficient Reason should be required reading 📚.
even our minds can be represented in the form of what's called "Matrixial trans-subjectivity"; it reveals the threads of trauma that tie us all together, allowing individuals to transform their unrelenting death drives into life drives
💊🐇🕳🔋🍪📡
#BeyondTheGlitch
answer = 1, or is it 🤣🤣🤣🤣
thanks! 🤗🤗x100