At 30 years old and a college math major once upon a time, I've tried to get going with geometric algebra many times. The problem was that the subject was always presented as a bunch of dry triangles thrown around the page following some arbitrary rules. This series has finally got me understanding why we care about this beautiful algebra and how to use it to carry out useful computations. Thank you!
It's kind of comforting to know that not only I, but also a math major, can get lost in the mathematical formalism of the sources. Sometimes I think that math authors don't write to explain the topic to novices, but to impress their peers.
Awesome series. After studying linear/abstract algebra for a few years, this subject (which I previously had little exposure to) really wraps everything up in a very satisfying way. Thanks for your outstanding efforts. P.S. You should consider doing a series on Lie algebra.
Bob
32:17 was the aha! moment. Now I understand why the definition for the cross product is as such. Thanks for this!
Thank you for this serious effort to assist us in familiarity with geometric algebra.
Thank you. This series is awesome. I've taken linear algebra and vector calculus, but I'm far more impressed with geometric algebra. To see a system of math with such far-reaching applications is ridiculous. Why is this not the foundation of math after algebra? I feel like, as a foundation, it would tie all of these complicated math courses together into analysis. In tensor calculus there is the idea of a covector: a function which inputs a vector and outputs a scalar proportional to the vector's length. I was trying to look for an analog in geometric algebra, and then I found this video. I think that the dot product of a vector with the dual of a bivector might be the analogous operation in geometric algebra. Is there any work on something like that? I'm also really interested to see interpretations of things like Jacobians and Stokes' theorem using geometric calculus. Is there any analog of covariance and contravariance? Thank you for making videos on this. What resource do you use to study this?
You are right, unifications are possible, and they are far from trivial. For example, all the integral theorems we know, including those from complex analysis, become one fundamental theorem. We do not need tensors; just study linear transformations in geometric algebra. You can solve eigenvalue problems over the real numbers by introducing the concept of eigenblades. You can formulate the special theory of relativity and quantum mechanics in 3D Euclidean vector space, and you do not need complex numbers in quantum mechanics. I suggest starting with the texts of David Hestenes and William Baylis; my own introductory book will also be published soon by Springer.
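On the covector question above: a covector on R³ can be modeled as "take the dot product with a fixed vector", which is exactly a linear map from vectors to scalars. A minimal numerical sketch of that idea (my own illustration, not from the video):

```python
import numpy as np

def covector(v):
    """Return the linear map u -> v . u, i.e. a covector on R^3."""
    return lambda u: float(np.dot(v, u))

omega = covector(np.array([1.0, 2.0, 3.0]))
u = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 0.0])

print(omega(u))              # 1.0: reads off the first component
print(omega(2 * u + 3 * w))  # 8.0: linearity, 2*omega(u) + 3*omega(w)
```

The output scales with the length of the input vector, matching the covector description in the comment above.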
Thanks so much for providing us with the keys to GA in your 16 lectures. Some lucky students will have you as their professor!! Your presentations were clear, interesting and always riveting. I consumed these lectures in about three days, they were that appealing. You have cleared up many of the little gaps in my understanding of the various objects and their interactions, I feel more confident now to read the literature.
The question in my mind is when the math and physics curricula will be revised to adopt GA as the universal language, replacing the hodge-podge we currently use. I see so many applications in classical mechanics and even quantum computing (I'm actually planning next to take a stab at a paper by Doran on GA and qubits).
I am truly grateful for your effort in producing these excellent lectures. Thank you, thank you, thank you!!!
+Keith Maynard
You're welcome. It's an ongoing series, so hopefully there will be more than 16, so long as I have the energy to produce them.
Geometric algebra does kind of have a cross product in arbitrary dimensions. It doesn't give a vector, though. It's the commutator product, and it usually takes two bivectors and outputs a third bivector: B₁ × B₂ = ½(B₁B₂ - B₂B₁). For the 3D Euclidean bivectors, it is exactly the cross product, except it isn't lying about the type it operates on. Technically the bivector commutator product only exists in 3D and above, since all bivectors in 2D commute with each other, and 1D and 0D don't have any bivectors.
Treating bivectors as axes rather than planes (because even when you're using an axis of rotation, it is never a vector), the commutator product gives the axis orthogonal to the two inputs, just like the cross product. But it's also just a part of the geometric product that composes reflections, so you're composing the two bireflections into a new rotation, which just happens to have an axis orthogonal to the other two rotations. This is true even if the axes don't pass through the origin.
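The claim above can be checked numerically. Below is a minimal, hand-rolled Cl(3,0) sketch (my own illustration, not from the video): multivectors are dicts from basis blades to coefficients, the geometric product sorts indices with sign flips, and the test verifies that the commutator of the bivectors dual to a and b is the bivector dual to a×b:

```python
import numpy as np
from itertools import product

# Minimal Cl(3,0): a multivector is a dict {blade: coeff}, where a blade is a
# sorted tuple of basis indices: () = scalar, (1, 2) = e1e2, (1, 2, 3) = I.

def blade_mul(a, b):
    """Geometric product of two basis blades; returns (sign, blade)."""
    sign, seq = 1, list(a) + list(b)
    # Bubble-sort the indices, flipping the sign per swap (e_i e_j = -e_j e_i for i != j).
    for _ in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    # Cancel repeated indices (e_i e_i = +1 in the Euclidean signature).
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and seq[i] == seq[i + 1]:
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return sign, tuple(out)

def gp(A, B):
    """Geometric product of two multivectors."""
    C = {}
    for (a, x), (b, y) in product(A.items(), B.items()):
        s, blade = blade_mul(a, b)
        C[blade] = C.get(blade, 0.0) + s * x * y
    return {k: v for k, v in C.items() if v != 0}

def sub(A, B):
    C = dict(A)
    for k, v in B.items():
        C[k] = C.get(k, 0.0) - v
    return {k: v for k, v in C.items() if v != 0}

def commutator(A, B):
    """B1 x B2 = (B1 B2 - B2 B1) / 2."""
    return {k: v / 2 for k, v in sub(gp(A, B), gp(B, A)).items()}

I_inv = {(1, 2, 3): -1.0}  # I^2 = -1 in Cl(3), so I^-1 = -I

def vec(v):
    """Embed a numpy 3-vector as a grade-1 multivector."""
    return {(i + 1,): float(c) for i, c in enumerate(v) if c != 0}

def dual_bivector(v):
    """Bivector dual to a vector: v I^-1."""
    return gp(vec(v), I_inv)

a = np.array([3.0, -1.0, 2.0])
b = np.array([0.5, 4.0, -2.0])
lhs = commutator(dual_bivector(a), dual_bivector(b))
rhs = dual_bivector(np.cross(a, b))
print(lhs == rhs)  # True: the bivector commutator mirrors the cross product
```

So in 3D the commutator product of the dual bivectors is exactly the dual bivector of the cross product, which is the "same operation, honest type" point made above.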
This video was exactly what I needed
Thank you for this excellent series of videos as an introduction to GA. Do you have any plans to extend the series, perhaps into geometric calculus? Also, how do you create the videos, i.e., what hardware and software do you use? Again, many thanks.
@4:30 is shifting basis vector analogous to swapping rows in a matrix?
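The analogy in this question does hold in one precise sense: each swap of two adjacent basis vectors flips a sign, just as swapping two rows of a matrix flips the sign of its determinant. A quick numerical check (my own illustration, not from the video):

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
M_swapped = M[[1, 0, 2]]  # swap the first two rows

# One row swap, one sign flip, just like e1e2 = -e2e1.
print(np.linalg.det(M))
print(np.linalg.det(M_swapped))  # same magnitude, opposite sign
```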
Thank you for this lecture.
I wonder if you could give an introduction to mathematical logic one day. You did terrifically well in your set theory course; for an introduction, it was quite comprehensive. I am not aware of any decent online math logic course, which is a great pity, because logic is the foundation of any reasonable thinking.
+ostihpem
I might consider it someday - I just got burned out with set theory. But mathematical logic can only take you so far due to its formalisms and if one is interested in good reasoning in general, that would just be logic, not mathematical logic, i.e. how a student of philosophy would (or used to) learn logic.
Nice videos, thanks. Is there any way to represent surfaces in higher dimensions with Clifford algebra (a way of parametrizing them)? I am actually looking for the intersection of two constraints in 4 dimensions.
On the subject of pseudoscalars commuting with vectors: in the first example, in which you proved e1I = Ie1, switching the leftmost e1 three times seems to give e1I = -Ie1 instead of the e1I = Ie1 that you showed in the video. That would imply that I anticommutes with e1 in particular, which I find odd. Is my line of thought correct, or am I missing something?
Also thank you for this series of videos! These are amazing, very instructional content, really interesting to watch. Keep it going!
+Zalemones1
But remember that e1 commutes with e1 so that first swap does not introduce a minus sign. A minus sign is picked up in a swap precisely because vectors with differing indices anticommute.
Oh I see, it's obvious once you say it haha
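For the record, the swap count in this thread can be written out blade by blade (my own elaboration, with I = e1e2e3):

```latex
e_1 I = e_1 e_1 e_2 e_3 = e_2 e_3,
\qquad
I e_1 = e_1 e_2 e_3 e_1 = -\,e_1 e_2 e_1 e_3 = e_1 e_1 e_2 e_3 = e_2 e_3 .
```

Moving the rightmost e1 to the front costs two sign flips (past e3 and past e2), and the final step is free because e1 commutes with itself, so e1I = Ie1 with no overall minus sign.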
Dear Mathoma, if it is true that I^(-1) = -I, then at 1:09:20 I guess you forgot to include the factor I. Isn't that so?
The first equation at 4:30 is problematic: the cross product stays invariant under space inversion, but the right-hand side changes sign. For this reason, we should treat this equation as a new definition of the cross product.
I think you meant 34:00, but yes, it seems like there is an issue/subtlety here. The bivector transforms like the "axial vector", invariant under space inversion, but when we change it to a vector that invariance is lost.
@@diribigal Yes, thank you, it is 34:00. The fact is that a simple product a×b cannot be a vector; it behaves badly under space inversion. In addition, we have many quantities in physics that are badly defined, like the magnetic field: it is a bivector, not a vector. Otherwise we get the strange situation that a straight wire carrying a current is not mirror-symmetric, which would mean that Maxwell's theory is not mirror-symmetric. The invalid definition of the cross product has caused a lot of problems in mathematics and physics. Geometric algebra is the answer to all such problems.
@@miroslavjosipovic5014 Thanks for confirming. Thinking in non-Geometric Algebra terms, the thing in the video would be like (i●(j×k))(u×v). Or in physics, (up to scalars) a weird calculation like "multiply this magnetic flux by this angular momentum/magnetic field".
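The inversion behavior discussed in this thread is easy to check numerically: under x -> -x both factors of the cross product flip, the two sign flips cancel, and a×b comes out unchanged, even though a genuine vector would flip. A quick sketch (my own illustration, not from the video):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 4.0])

# Space inversion sends every true vector v to -v.
flipped = np.cross(-a, -b)
print(np.allclose(flipped, np.cross(a, b)))  # True: a x b does NOT invert,
                                             # so it is not a true vector
```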
This is an interpretation of the Hodge dual operator in G(3).
What happens if you try to rotate by something other than a bivector? E.g. (e1e2e3)v(e3e2e1), is there any meaning to this?
A composition of 3 reflections. In 2D, this could be a glide reflection. In 3D, it could be a point reflection. Exponentiating it doesn't really give it a meaning, but you can still use it as a transformation.
What is the relation of the pseudo-scalar and the hodge star operator?
The relation is simple: we use the unit pseudoscalar to define the duality operation, which we sometimes denote by the Hodge star. Simply take any element of the geometric algebra and multiply it by the inverse of the pseudoscalar.
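Concretely, in G(3) with I = e1e2e3 (so that I² = -1 and I⁻¹ = -I), the recipe above reads, for any multivector A (this is one common sign convention; some authors multiply by I instead, or use the reverse):

```latex
\star A = A\, I^{-1} = -A\, I,
\qquad\text{e.g.}\quad
\star(e_1 e_2) = (e_1 e_2)(-\,e_1 e_2 e_3) = e_3 .
```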
Music is on point.