Spinors for Beginners 11: What is a Clifford Algebra? (and Geometric, Grassmann, Exterior Algebras)
- Published: 9 Jul 2024
- Full spinors playlist: • Spinors for Beginners
Leave me a tip: ko-fi.com/eigenchris
Powerpoint slide files + Exercise answers: github.com/eigenchris/MathNot...
Sudgylacmoe: / sudgylacmoe
Swift Introduction to Geometric Algebra: • A Swift Introduction t...
Swift Introduction to Spacetime Algebra: • A Swift Introduction t...
Bivector: / @bivector
Crystal-Ann McKenzie's thesis: scholar.uwindsor.ca/etd/5652/
0:00 - Introduction
2:57 - Grassmann Algebras (wedge product)
13:20 - Clifford Algebras
22:45 - Grassmann vs Clifford Algebras
26:35 - Abstract definitions of Algebras
Error at 19:17: each of "i", "j", and "k" should have a minus sign in front. I got it correct in video 6.1, but got it wrong here for some reason.
May I give my opinion?
The isomorphism between the quaternion basis and Clifford algebra basis is not unique. So I think it depends on the author’s preference
There could be several conventions as follows
1) Wikipedia
i ↔ Iσ_1 = σ_1σ_2σ_3σ_1 = σ_1σ_1σ_2σ_3 = σ_2σ_3 = σ_23
j ↔ Iσ_2 = σ_1σ_2σ_3σ_2 = −σ_1σ_2σ_2σ_3 = −σ_1σ_3 = σ_31
k ↔ Iσ_3 = σ_1σ_2σ_3σ_3 = σ_1σ_2 = σ_12
(I: pseudoscalar in 3D; ↔ represents the isomorphism)
2) Physics from Symmetry (Springer 2018, p. 35)
i ↔ Iσ_2 = σ_31, j ↔ Iσ_1 = σ_23, k ↔ Iσ_3 = σ_12
3) By Jean Gallier (Linear Algebra for Computer Vision, …. Vol 1, Chapter 15)
i ↔ Iσ_3 = σ_12, j ↔ Iσ_2 = σ_31, k ↔ Iσ_1 = σ_23
4) Hamilton convention - most common (my preference): the same as in the previous video, as you mentioned
i ↔ −Iσ_1 = σ_32, j ↔ −Iσ_2 = σ_13, k ↔ −Iσ_3 = σ_21
5) Shuster convention
i ↔ Iσ_3 = σ_12, j ↔ −Iσ_2 = σ_13, k ↔ Iσ_1 = σ_23
Therefore video 11 follows the Wikipedia convention and the previous one follows the Hamilton convention for the isomorphism and multiplication rule. There is no problem as long as one convention is used consistently in context. I think that the Clifford algebra is the most powerful and beautiful mathematical language in physics.
I really love the video series. I respect Eigenchris.
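The conventions listed above are easy to check numerically. Below is a minimal sketch (assuming NumPy, with the standard 2x2 Pauli matrices standing in for the Cl(3,0) basis vectors) verifying that the Hamilton convention i ↔ σ_3σ_2, j ↔ σ_1σ_3, k ↔ σ_2σ_1 satisfies the defining quaternion relations i² = j² = k² = ijk = −1:

```python
import numpy as np

# Pauli matrices as a concrete representation of the Cl(3,0) basis vectors.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Hamilton convention: i <-> s3 s2, j <-> s1 s3, k <-> s2 s1.
i, j, k = s3 @ s2, s1 @ s3, s2 @ s1

# Defining quaternion relations: i^2 = j^2 = k^2 = ijk = -1, and ij = k.
assert np.allclose(i @ i, -I2)
assert np.allclose(j @ j, -I2)
assert np.allclose(k @ k, -I2)
assert np.allclose(i @ j @ k, -I2)
assert np.allclose(i @ j, k)
print("Hamilton convention satisfies the quaternion relations")
```

With any of the other conventions the squares are still −1, but some of the products like ij can pick up an opposite sign, which is exactly the sign issue the pinned comment discusses.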
Thanks for a great and enjoyable first introduction to Clifford Algebras for me. Related to your point, I was wondering why quaternions aren't Cl(0,3), as i^2=j^2=k^2=-1? Perhaps @superek1308's remark relates to this, but I would need to understand more about the isomorphisms.
@@DeclanMBrennan The issue with Cl(0,3) is that i*j is not equal to k. i*j would give you the bivector ij, and k would be its own separate symbol. The reason that the even-grade members of Cl(3,0) give us the quaternions is that the product of 2 bivectors gives us another bivector, so we can get i*j=k.
@@eigenchris Thank you very much. That clears things up for me.
There is also a typo at 8:49: w = u^v -> w = u + v
Chris, this was by far the best explanation of Clifford Algebra I've come across. Sudgylacmoe is also sensational, but your systematic treatment filled so many gaps in my understanding. Thank you so much!
this channel is a complete banger
feels even more impressive when a physicist gives you mathematical insight
You are such a talented Physicist, better than any university course.
POG POG POG POG!!!
eigenchris clifford algebra just dropped!!! let's fucking goooooooooooooooooo
You know what, I jumped straight into this video without going from video 1. I had to do a lot of reading to catch up. I’ll go back and learn those videos and come back to this one.
this is the episode i have looked most forward to
Maxwell's famous equations were not four but twenty. But the introduction of the vector operator nabla let us reduce them to just four. It is nice to see they are but aspects of just one, and it makes much sense: one law for one major part of Physics, electromagnetism. I'd be glad to learn which those for the strong and weak forces are.
I've been trying to make sense of all these concepts for years. This video does the job in half an hour. Congratulations!
This is great! I haven't seen an explanation of the abstract tensor quotient rule with this clarity.
Michael Penn's tensor product video is a perfect complement because he gives good motivation, but the intuition didn't click for me until this vid of eigenchris. :-)
Your videos have changed how I see physics and have helped me learn exterior topics because I have this as a basis. I can't say how much I appreciate the effort you put out here.
Brilliant, elegant and generous, as you always do. Thank you very Very much !
Another outstandingly clear and understandable production!
awesome content dude! You are so good at breaking things down into clear and small steps!
greatly appreciate your effort to decompose a concept this advanced step-by-step
Simply the most useful playlist on this difficult topic that I have found in the last few years. Congratulations and thank you for your work. I do look forward to tracking the upcoming episodes to the very end!
Definitely one of your better explanations! You are getting really good at this…please keep it up. I’m learning a lot thanks
What a clear and accessible lecture on Clifford algebra!!
brilliant. Best video ever introducing Clifford algebras.
This is wonderfully systematic and clear. Thanks!
Great stuff. Great series.
Amazing video! Thanks a lot for the effort!
these videos are so, so, so good
Your best video ever! Thank you much!
Liked at first
Now will watch
Thanks for your work
It helps a lot (A LOT)
I was waiting for this one
Excited to be getting to this! I've been reading Doran and Lasenby's text on this stuff, but the specific applications to quantum mechanics has still left me a bit mystified. Your videos are so clear, I'm sure it'll help me grok the physics better
Hey, I recognise that tensor algebra, that's a Fock space (for bosons). This is making my QFT lectures make a lot more sense in retrospect.
its really amazing
thank you for this video
@eigenchris please also make the other videos on spinors . these were very helpful so far
If you like Clifford algebras, you'll like Grassmann.jl my computer algebra implementation for the Julia language
Very impressive explanation!
I have to say that sometimes I really regret not continuing on in grad school in mathematics, since there look to be some really neat things in algebra once you get beyond the basics from undergrad abstract algebra. I wish I had gone straight into those kinds of things rather than be stuck with the few courses offered by the grad school I managed to get into, and became totally uninterested in continuing.
I think Geometric Algebra is pretty obscure/niche, even in grad school. At this point it has a small but growing number of people who are obsessed with it, but it's not a widely-taught topic.
oh this just occurred to me, related to your previous series on general relativity:
parallel transport of a vector can be used to determine the curvature of spacetime. What would happen if we perform parallel transport of a *bivector* along a closed path in curved spacetime? (think of it as doing repetitive angular momentum measurements along a closed path in curved spacetime)
if this works, what result would we get? how can the two vectors (of a bivector) evolve while being parallel transported?
I think the result is the "boring answer": it's the same as if you just transported a pair of vectors. You could take the wedge product of the vectors after transporting the individual vectors, and the result is what you'd get if you parallel transported their bivector.
I'm confused about where the episode number 10 is
When I went to college and I learned there was more than one algebra I got scared
Excellent video series Chris, watching these videos is bringing together a number of strands of geometric algebra and tensors for me. And I wanted to understand spinors. Just a few questions: (1) Does the analysis you give apply to forms in exactly the same way? (2) Does the analysis also apply to mixed tensors in the same way? (3) You use a 2D tensor space and apply the special rules (with two orthonormal basis vectors), and decompose this into the 2D Grassmann Algebra and the 2D Cl(2,0) Algebra. So, do these same rules work the same way for higher dimensional space and spacetime, or are the rules more complicated?
For 1 and 2, I'm not sure what you mean by "analysis". Which parts are you asking about specifically for forms and tensors?
For 3, the "recipe" I use with the quotients work in any dimension. The only thing that matters is the "squared length" formula for the basis vectors in the Clifford algebra construction. In the simplest case, all the basis vectors will square to +1, but you can have -1s and 0s mixed in as well in the more general case.
Thank-you for your reply Chris - I will rephrase and try to be more specific. The abstract definition of Clifford Algebras, starting at 26:44, applies the special rules to the ex and ey bases with lowered indices. There is no mention of bases with upper indices or mixed upper and lower indices. Nor is there mention of three bases (as in Cl(3,0) or four bases (as in Cl(1,3)). Are these special rules only applicable to Cl(2,0) with basis vectors, or are they generally applicable to all the Clifford Algebras, and algebras involving dual-basis vectors and mixed basis/dual basis vectors. @@eigenchris
Nice
Great video, you explained everything perfectly. Something I want to see in a future video: when studying the Dirac equation, we can consider the massless case and find that the 4-dimensional Dirac spinor decomposes into the 2-dimensional left-handed and right-handed Weyl spinors. I've always disliked this step because it seems to essentially rely on seeing the Dirac spinors as 4-dimensional objects. How does this decomposition work, purely in terms of the abstract Clifford symbols?
Also, looking at the section where you decompose the Clifford product into a scalar product and a wedge product, it might be cool to add that the cross product can be defined as a wedge product with the pseudoscalar pulled out.
The explanation certainly isn't perfect; there are some minor mistakes in there, even if the presentation is nice looking. For example, wedge(a,b) in 3D is not the cross product, it's the complement of the cross product (plane vs vector difference), so the video is not perfect and has mistakes.
@@CrucialFlowResearch I don't consider it a mistake, as he doesn't state what the cross product of two vectors is anywhere in the video, and in fact, it doesn't matter for the purpose here. Though the unique scenario where we have a vector output is the cross product of two 3D vectors, the generic operation is called the Hodge star, something brought in by differential forms, also unified under the CA umbrella.
@@CrucialFlowResearch At 10:30 I'm pretty careful to say it's the COMPONENTS that are the same, not the objects themselves. I realize the wedge product produces a bivector and the cross product produces a vector.
@@CrucialFlowResearch he pretty clearly said it was the COMPONENTS that were the same…
@@eigenchris actually, the components are not equal, the cross product depends on the metric, so the cross product components could differ from the wedge product, the cross product is a pseudobivector not a bivector
Wow - this is just such a wonderful video series. But... where is that next video? 🙂
In progress. It's 40min long, so it's taking me longer than usual.
@@eigenchris I'll definitely keep my eye on your channel; thanks again!
Wonderful videos, I am very grateful for your hard work. However I think there is a mistake at 32:17; let me know if I am wrong. I think that given the tensor algebra, the Clifford algebra is the quotient by the ideal generated by the elements of the form $v \otimes u + u \otimes v - 2g(u,v)$. With your ideal I cannot see how to obtain the anti-commutativity.
If you take (u+v)⊗(u+v) = u⊗u + u⊗v + v⊗u + v⊗v and apply the quotient rule, we get:
g(u+v,u+v) = g(u,u) + u⊗v + v⊗u + g(v,v),
and since g(u+v,u+v) = g(u,u) + g(u,v) + g(v,u) + g(v,v), this gives us:
u⊗v + v⊗u = g(u,v) + g(v,u) = 2 g(u,v).
The anti-commutativity only applies if two vectors u and v are orthogonal, i.e. g(u,v) = 0. In this special case we get:
u⊗v + v⊗u = 0
or equivalently, the anti-commutative property:
u⊗v = - v⊗u
@@eigenchris Thank you very much!!
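The anticommutator identity derived in this thread can be spot-checked numerically. A minimal sketch, assuming NumPy; the Pauli matrices serve as a concrete matrix representation of the orthonormal Cl(3,0) basis vectors, with matrix multiplication playing the role of the quotiented tensor product:

```python
import numpy as np

# Pauli matrices represent orthonormal basis vectors of Cl(3,0).
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def vec(a):
    """Map a 3D vector a to its Clifford representative a1*s1 + a2*s2 + a3*s3."""
    return sum(ai * si for ai, si in zip(a, s))

rng = np.random.default_rng(0)
u, v = rng.normal(size=3), rng.normal(size=3)
U, V = vec(u), vec(v)

# The quotient relation forces  uv + vu = 2 g(u,v) * 1 ...
assert np.allclose(U @ V + V @ U, 2 * np.dot(u, v) * I2)
# ... and u^2 = g(u,u) * 1, the "squared length" rule.
assert np.allclose(U @ U, np.dot(u, u) * I2)
```

For orthogonal u and v the right-hand side vanishes and the product anti-commutes, exactly as in the special case above.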
Why do we use Cl(1,3) or Cl(3,1) for spacetime algebra?
Are they equivalent ?
Their even sub-algebras are equivalent (the elements with 0, 2, or 4 vectors). If you look at the odd parts as well, they are not equivalent. However, only the even parts are used in the Spin group for rotations and boosts, so either Cl(1,3) or Cl(3,1) is acceptable to use.
I feel like I’m missing something.
You say that the complex numbers are a Clifford algebra, but also that Clifford Algebras are anti-commutative. However, the complex numbers are commutative. Does something else anti-commute?
I'm saying that the vectors/symbols in a Clifford Algebra anti-commute with each other. Since complex numbers only have the one symbol "i", there's nothing else for it to anti-commute with. "i" commutes with itself, and it also commutes with the scalar "1" since scalars commute with everything.
Great video! Can't wait for more in the series. Is there a notion of a tensor quotient algebra with the ideal being v \otimes v - ||v||^2 for the complex case (replacing g(v,v) with ||v||^2 only matters in modules over C or H)? That is: is there an algebra that is to complex inner products as the Clifford algebra is to non-degenerate symmetric bilinear forms?
Also: I have heard it said that Cl(1,3) can be represented by M(2,H). Matrices over the quaternions aren't usually taught, so it would be interesting to see this isomorphism explicitly.
Can't wait for the next video in the series!
I don't think the complex case changes things too much. I believe you can take the quotient using any field of scalars. You technically only need a quadratic form, not an inner product. But the quadratic form can be used to define an inner product. I'm less familiar with modules though.
As I said, every Clifford Algebra has an infinite number of matrix representations. These matrices can have entries that are members of other Clifford Algebras (e.g. sigma matrices have complex entries, which is Cl(0,1)). I could believe that Cl(1,3) has a quaternion representation, but I haven't looked into it. I know it's easy to write the gammas using 2x2 matrices where the entries are from Cl(3,0) (aka the sigma matrices). You could probably change the sigmas into quaternions somehow. Not sure if that would involve the complex i or not.
@@eigenchris It's interesting that you identified M(2,\C) with M(2, Cl(0,1)). It's obviously true, but thinking of matrix representations of Clifford algebras valued in other Clifford algebras is a really clever proposition. I believe "Matrix Gateway to Geometric Algebra, Spacetime and Spinors" by Sobczyk has a section on matrices of Clifford algebras. So that may be worth some deeper digging.
I bring up M(2, \H) in the context of Cl(1,3) because it always felt strange to me that Spin(1,3) is SL(2,\C). Cl(1,3) is usually identified as a subalgebra of M(4,\C) based on how the basis are represented (Dirac, Chiral, or Majorana). In that context it would make sense for Spin(1,3) to be naturally represented by a subgroup of GL(4, \C). The fact that this group is SL(2,\C) always to me indicated that Cl(1,3) ought to have a more natural representation in terms of 2 by 2 matrices: hence M(2, \H). But the expression of Cl(1,3) in terms of M(2,\H) is almost never given. Which feels like a loss to me because it might make SL(2,\C) more intuitive.
As a side note, if you complexify Cl(1,3) to the Dirac algebra \Cl(1,3), then you could use 2 by 2 biquaternionic \B = \C \otimes_{\R} \H matrices to represent this algebra. I wrote a package for Python to help me in calculations for biquaternionic matrices for a research project in undergrad, but it never really went anywhere.
Thanks for answering in detail, its really appreciated :). Love the videos!
Also: I can't think of anywhere else to write this, but have you put any thought into how conformal geometry fits into the Clifford algebra picture? It's on my mind recently and I can't find a text that gives an answer I feel is satisfying.
@@eigenchris If you have a quadratic form for a complex vector space then you can Clifford it, but I think if you have an hermitian inner product / hermitian form instead (to guarantee that Q(v,v) for all v is real by conjugating one of the inputs), I'm not sure if that can be a clifford algebra. That would violate normal symmetry of the inputs.
I saw a wikipedia article about (minimal?) matrix representations for Clifford algebras. (I think it was called classification of Clifford Algebras?) There's some sort of mod 8 behavior with what the entries of the matrices have to be (real, imaginary, or quaternionic), and of course, the matrices increase in size as well.
do the sigma symbols in clifford algebra anticommute with themselves? does ai*ai = - ai*ai ? idk if that makes any sense. i'm guessing the square just gets rid of the negative
A symbol "anti-commuting with itself" would mean you get zero. This is what happens with the wedge product. In Clifford Algebras, a symbol doesn't "anti-commute with itself". Instead it gives the symbol/vector's squared length.
8:47 minor thing, but isn't the thing you're doing here to take w = u + v, rather than w = u ^ v as indicated in the box?
Yeah that was a bad typo on my part.
Is it me or was episode 10 skipped? Because I can't seem to find it anywhere.
I'm working on video 10 now. I made a community post about how I was flipping the order of 10 and 11.
Reaaally great video! I've always wondered if a number system could be found to model special relativity in 2+1 space (easier to visualize than 3+1 ^^); will Cl(2,1) do the trick (with + - - signature)? And if so, what will be the associated matrices? Thanks :)
Cl(1,2) will probably also work. They have the same even subalgebra (so the same spinors), but odd elements could behave slightly differently.
I assume Cl(1,2) for + - - or Cl(2,1) for - + + would do the trick. That would allow you to do 2 boosts and 1 rotation. You can just re-use the "Dirac"/"gamma" matrices, but just eliminate the Z one.
@@eigenchris What do you mean by eliminating the Z one? Keeping 4x4 matrices with some of the 1s replaced by 0, or 3x3 matrices by removing the appropriate row and column?
For example, for g1 = [[0 0 0 1] [0 0 1 0] [0 -1 0 0] [-1 0 0 0]] I tried [[0 0 0 1] [0 0 1 0] [0 -1 0 0] [0 0 0 0]] and [[0 0 1] [0 1 0] [-1 0 0]] or [[0 0 1] [0 -1 0] [-1 0 0]], and none of them squares to minus identity...
But thanks for the answer and all the incredible content !!!
11:36
It's not a diamond; in the limit, it's a bell curve.
It's only diamond shaped in 0-3 dimensions.
I've seen the repeated tensor product written as a power where the exponent gets marked with an (x) to stress it is a repeated tensor product, not a Cartesian product.
For illustration, if V has n dimensions, the Cartesian product V x V has 2n dimensions (it's pairs of vectors) but the tensor product V (x) V has n² dimensions (it's like matrices), so using V² for both of these different things is confusing
Yeah, you're right. I think I messed up and should have used the tensor symbol. What I wrote is more like the Cartesian product.
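The dimension counts in this thread are easy to verify concretely. A small sketch, assuming NumPy (`np.tensordot` with `axes=0`, equivalently `np.outer`, builds the tensor-product components):

```python
import numpy as np

n = 3
v, w = np.arange(n), np.arange(n) * 2.0

# Cartesian product V x V: a pair of vectors, 2n numbers total.
pair = np.concatenate([v, w])
assert pair.size == 2 * n

# Tensor product V (x) V: an n-by-n array of products, n^2 numbers.
vw = np.tensordot(v, w, axes=0)   # same as np.outer(v, w)
assert vw.shape == (n, n) and vw.size == n**2
```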
So can we view wedge products simply as determinants? Also, it would have been nice at the end in the abstract discussion to say if there's an abstract way of defining the connection between Grassmann algebras and Clifford algebras.
Determinants give oriented areas, and the components of wedge products can be thought of as the oriented areas of a set of vectors. So they are very closely related.
det(T(v)) = (T(e₁) ∧ T(e₂) ∧ ... ∧ T(eₙ)) / (e₁ ∧ e₂ ∧ ... ∧ eₙ)
Personally, I consider the right hand side of that easier to compute than the left hand side. The determinant of a matrix requires memorizing complicated rules that either only apply to specific sizes, or require recursing, but in _just_ the right way. The wedge product of a sequence of vectors just requires remembering how orthogonal vectors anti-commute and parallel (and identical) vectors cancel each other out. The wedge product is also more general since it doesn't care about the overall size of your vector space; if there are more dimensions than vectors you're wedging together, then the result won't be a pseudoscalar, but is that really an issue?
Aren't the results of wedge products basically anti-symmetric tensors? So e.g. bivectors are isomorphic to rank-2 anti-symmetric tensors, right?
@@seneca983 Yes, that's correct. The "quotient" procedure I do at the end basically takes the set of tensors and forces them to be anti-symmetric.
@@eigenchris Thanks.
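The determinant-as-ratio-of-wedges formula from this thread can be checked with a tiny wedge-product implementation. This is a sketch, not any standard library: multivectors are stored as dictionaries mapping a frozenset of basis indices to a coefficient, and the sign comes from counting the swaps needed to sort the combined index list:

```python
def wedge(a, b):
    """Wedge product of multivectors stored as {frozenset of basis indices: coeff}."""
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            if I & J:          # repeated basis vector -> zero
                continue
            # Sign = (-1)^(inversions) from sorting the concatenated index list.
            idx = sorted(I) + sorted(J)
            sign = 1
            for p in range(len(idx)):
                for q in range(p + 1, len(idx)):
                    if idx[p] > idx[q]:
                        sign = -sign
            K = I | J
            out[K] = out.get(K, 0.0) + sign * x * y
    return out

def from_vector(v):
    return {frozenset([i]): c for i, c in enumerate(v)}

# det of T from the wedge of its columns: T(e1) ^ T(e2) ^ T(e3) = det(T) e123.
T = [[2.0, 1.0, 0.0],
     [0.0, 3.0, 1.0],
     [1.0, 0.0, 1.0]]
cols = [from_vector([row[j] for row in T]) for j in range(3)]
w = wedge(wedge(cols[0], cols[1]), cols[2])
print(w[frozenset({0, 1, 2})])  # prints 7.0 = det(T)
```

No recursion and no size-specific rules: the same `wedge` works in any dimension, and for fewer vectors than dimensions it simply returns a lower-grade multivector instead of a pseudoscalar.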
things i noticed:
1. for angular momentum, its inversion under parity is important, right? And angular momentum is an (axial) vector with three components in 3D space. I don't think it's obvious from bivector notation that u∧v can also be thought of as a vector, being added to another vector, etc.
2. for the pauli matrices (19:24), the trivector is i times the scalar, and the bivector is i times the vector if we match the columns. interesting....
and a question:
if two bivectors are equivalent as long as their area and orientation are the same, what if we want to split the bivector into its vector components? will there be no unique way to do this?
ex: angular momentum is defined (for one particle) to be the cross-product between linear momentum vector and the position vector (from the center of rotation). We can "split" angular momentum into those two vectors, both have physical meaning. would this be lost if there are multiple way to form a bivector with the same direction and area? especially the circle bivector example, which looks like infinitesimal vectors glued together head-to-tail by wedge products
Both of these are actually incredibly good observations, and are absolutely not just coincidences! If you start looking further into exterior and clifford algebras, you’ll come across an operation called the “hodge dual”.
For instance 1, the hodge dual maps vectors to its perpendicular bivector using the right hand rule (there is no escape).
And for instance 2, in that specific matrix representation of Cl(3,0), taking the hodge dual of a matrix amounts to simply multiplying by i.
Splitting a bivector into its vector components is literally factoring, and yes, any given bivector has multiple pairs of vectors that can be used to make it. If however you have one of the two original vectors, you can recover a line of potential vectors that formed the bivector using a form of division. If you have the vectors' dot product as well, such as when using the geometric product instead of the wedge product, then you can divide the scalar plus bivector result by one of the input vectors to get the other input vector.
After all, how could you recover the displacement vector from the fulcrum given only the angular momentum, when you can literally spin around said angular momentum bivector to completely change the displacement vector without changing the angular momentum?
Regarding the things you noted, you managed to discover _duality!_ The pauli matrices form an orthonormal basis of Cl(3), so the highest grade object is the trivector, which in Cl(3) squares to -1. This highest grade object is often called the pseudoscalar and written as i or I, for totally no apparent reason. ;) Multiplying I by a multivector (as long as I doesn't square to 0) is one way to define a multivector's dual. For Cl(3), this turns scalars into trivectors and vice versa, and turns vectors into bivectors and vice versa. In general, the dual of a basis multivector has every basis vector that the input did _not_ have.
The angular momentum as a vector is the dual of angular momentum as a bivector, and vice versa up to a sign. I did say "one way to define a multivector's dual." While the dual of a basis multivector is always orthogonal to it, there are naturally 2 such basis multivectors that are orthogonal, so it's actually ambiguous which is more correct. In most cases, the dual is often undone shortly after applying it, so it _usually_ doesn't really matter what form you use, but it's still good to keep in mind.
1) Yes, "axial" vectors are really bivectors in disguise. In my opinion, bivectors are much more intuitive to deal with.
2) Again, yes. The complex "i" can be completely replaced with the Clifford Algebra trivector (often written a capital "I") in this case.
You can look into the "hodge star operator", as others have suggested. It flips between the two different representations.
Final question: there are an infinite number of ways to decompose any given bivector. You example, you could re-write "ex ∧ ey" using a new basis "eu = (ex + ey)/sqrt(2)" and "ev = (-ex + ey)/sqrt(2)", which is the same basis, just rotate 45 degrees. If you like, try writing "eu ∧ ev" and make the substitutions above, then simplify. You'll find you get "ex ∧ ey" back again.
@@tylerfusco7495 maps vector into its perpendicular bivector?
hmmm lets say i have a vector of length R.
the vector pierces a "current loop" at a right angle.
if the area within the "current loop" is also R and the direction of "current" and the vector follows right hand-rule
then they are dual to each other?
something like this?
thinking about this "duality" thing, it seems like every axial vector/bivector is associated with a loop and its area. Kind of makes sense since spin is one of the most important examples, and it's measured through magnetic moment = directional current x area
lol i'm less mathematically minded than you guys are
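The 45-degree basis example eigenchris suggests above can be verified directly. A minimal sketch assuming NumPy; in 2D the wedge of two vectors has a single component, the coefficient of ex ∧ ey:

```python
import numpy as np

def wedge2(u, v):
    """Coefficient of ex ^ ey in the wedge of two 2D vectors."""
    return u[0] * v[1] - u[1] * v[0]

ex, ey = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# Basis rotated 45 degrees: eu = (ex + ey)/sqrt(2), ev = (-ex + ey)/sqrt(2).
eu = (ex + ey) / np.sqrt(2)
ev = (-ex + ey) / np.sqrt(2)

# Same bivector from a different pair of vectors: eu ^ ev = ex ^ ey.
assert np.isclose(wedge2(eu, ev), wedge2(ex, ey))
```

This is the non-uniqueness of the factorization in miniature: infinitely many rotated pairs produce the exact same bivector.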
9 -> 6.1 -> 11 ?
Some serious advanced math going on here. (I made a community post saying video #10 is coming next.)
Thank you Prof. Chris. This is a pedagogical achievement.
is there a typo in the video title? because i can't find part 10.
I left a community post saying part 10 will be posted next.
@@eigenchris thanks!
I have to ask as no one else has - what happened to video #10?
It's still in progress. I finished this one first, so I uploaded first. I left a community note about it, but I guess that only reaches a limited number of people.
Umm... I don't understand how to find the dot product of, say, a vector and a bivector.
The dot + wedge definition only works for vectors. For bivectors and up, you do the Clifford product by just distributing the basis elements over each other, and setting the squares to +1 and -1 as needed.
@@eigenchris Sure, but i'm interested in the foundations, not application. On decomposable elements there seems to be a determinant formula. Perhaps it has meaning in your nice geometric understanding.
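The "just distribute the basis symbols" recipe from this thread can be illustrated with the Pauli-matrix representation of Cl(3,0) (a sketch assuming NumPy; matrix multiplication stands in for the Clifford product):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# A vector times a bivector, done by distributing basis symbols:
# s1 (s1 s2) = (s1 s1) s2 = s2          -- grade lowers by 1 ("dot"-like part)
assert np.allclose(s1 @ (s1 @ s2), s2)
# s3 (s1 s2) = s1 s2 s3 = trivector     -- grade raises by 1 ("wedge"-like part)
assert np.allclose(s3 @ (s1 @ s2), s1 @ s2 @ s3)
```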
is Wedge product same as outerproduct?
No, an outer product is something else
Geometric Algebra sources often call the wedge product the outer product (or exterior product). In non GA sources, outer product is usually the tensor product.
@@jasonwilkes9383 Depends what you mean by "outer product". The key property of the wedge product is that it's anti-commutative. The outer product of columns and rows doesn't have this property.
@@eigenchris Of course! Didn't mean to suggest otherwise. Fantastic video by the way. :)
@@jasonwilkes9383 Oh, shoot, I meant to reply to the original top comment by adrian, not yours. Whoops.
nice video
19:17 minor error, should be \sigma_{yz}=-i and negative for all the rest because ijk needs to equal -1
Agh. I got it right in 6.1. No idea why I removed the negative signs. I'll leave a pinned comment about this.
And this is why it's often easier to write quaternions as bivectors rather than i, j, and k. Each basis quaternion is the product of two of the basis vectors of Cl(3), though which two is highly dependent on context and personal taste. Whereas there's only 1 way to write the quaternions (not counting the scalars), there are 48 different ways to write the bivector basis, several of which satisfy the defining equations for the quaternions, while several others don't, but they're all trivially convertible into each other since all the information you need is explicitly stated.
The basis given in the video is one of the most common, being a cyclic ordering where each consists of the basis vectors other than the one in their respective position. Reversing their order is also common when listing the full multivector basis in a line so that the dual operation reflects said line across its center.
Another common order is lexicographic order: x̂ŷ, x̂ẑ, and ŷẑ, which is often used because cyclic orderings are much harder to find for higher dimensional Clifford Algebras, and the lexicographic order is easier to generate algorithmically.
The most common basis for representing the quaternions from what I've seen is ŷẑ, x̂ẑ, x̂ŷ, which is the lexicographic ordering reversed and the cyclic ordering with ẑx̂ flipped to x̂ẑ. This preserves the dual property due to i, j, and k often being associated with x̂, ŷ, and ẑ respectively, and satisfies the quaternion equations, but having the one flipped basis could be unsatisfying for some.
The most _sinister_ basis I've seen is ẑŷ, x̂ẑ, and ŷx̂, flipping all 3 parts of the cyclic ordering rather than just one, which makes it feel more symmetrical than the previous while also satisfying the quaternion equations. It's sinister however because it's left-handed (which is literally what sinister originally meant).
No matter which you are given, you are completely free to shuffle around the components and flip each basis bivector at the cost of a minus sign and nothing about the algebra will even notice, so you're completely free to use whatever convention you like. The biggest problem is just which do you want to use when specifically converting to i, j, and k.
@@angeldude101 well, all quaternion-based multiplication is left-handed no matter which isomorphism you use, so yeah, I agree that it's better to write them as a scalar+bivector directly
@@person1082 Funny you say that since most sources seem to claim that quaternions are right-handed, even though GA kind of forces the quaternion basis to be left-handed in order to conform to the defining equations.
How sinister of them!
Is episode 10 missing???
I'm working on it now. I left a community post saying they will come out of order.
where did the grade come out from suddenly? You didn't define grade :D
Have you considered going on patreon
I have a Ko-fi tip jar linked in the description.
The Clifford algebra of spacetime has a real matrix representation; the metric signature is (-,+,+,+):
γ0
0 1 0 0
-1 0 0 0
0 0 0 1
0 0 -1 0
γ1
0 -1 0 0
-1 0 0 0
0 0 0 1
0 0 1 0
γ2
0 0 0 1
0 0 1 0
0 1 0 0
1 0 0 0
γ3
1 0 0 0
0 -1 0 0
0 0 1 0
0 0 0 -1
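A quick numerical check of the commenter's matrices (a sketch assuming NumPy) confirms they satisfy the Clifford relation γμγν + γνγμ = 2 ημν · 1 for the (-,+,+,+) metric:

```python
import numpy as np

# The four real gamma matrices from the comment above.
g = [np.array(m, dtype=float) for m in [
    [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]],   # gamma0
    [[0, -1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]],   # gamma1
    [[0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]],     # gamma2
    [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]],   # gamma3
]]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # metric signature (-,+,+,+)
I4 = np.eye(4)

# Check the Clifford relation  g[mu] g[nu] + g[nu] g[mu] = 2 eta[mu,nu] I.
for mu in range(4):
    for nu in range(4):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * I4)
print("All 16 anticommutators match the (-,+,+,+) metric")
```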
Last video was number 9, this is 11, did I miss a video?
check his community tab, he said he wanted to do that one later since he had this one done first, and wanted to get into the definitions of Clifford and Grassmann algebras
GRASSMANN has two Ns
LMFAO
Chris, PLEASE - HE'S NOT THE SAME GUY AS OUR DEALER XD
Oops. I'll keep that in mind for the next videos. Thanks for pointing that out.
I haven't seen any evidence that Grassmann WASN'T a dealer.
Hermann Graßmann actually was a high school teacher. And he switched to linguistics because no one took his maths seriously.
Then he found lasting recognition in Vedic studies - so he really didn't need drugs to be high!
btw. Gras as in marijuana is spelt with one simple s in German, i.e. Graßmann - der Grasmann.
@@franks.6547 KEK
yep, he DEFINITELY (positive definite -ly) was a dealer, then.
Viva Las Vedas!
Writing -u^v without parentheses should establish (-u)^v = -(u^v) first
Where is 10 ?
I left a community post about how I'm skipping it for now.
More flim-flam please
ah.. at last...
Yay no politics!
Spinors for Beginners 13 will be on gender spinors.
@@eigenchris Just what i've been hoping for, I cant wait until we modify the tensor product to derive the infinite set of genders!
P.S I really appreciate your videos, and I mean no hostility by my comments.
@@eigenchris but, a multivector has only two orientations. You can't deny the mathematics.
@@jamescook5617What about a linear combination of several multivectors though.
I don't think vv = ||v||². For vectors, I think it's v•v. The magnitude shouldn't include a bivector, after all.
The bivector part goes to zero, because the wedge product of a vector with itself goes to zero.
@@eigenchris oh yeah. I guess I forgot about that somehow. Thanks for answering a stupid question.
nice, but I would appreciate it even more if you kept consistent with "zero", "null vector", "null bivector", instead of using "zero" most of the time
Zero is the cross roads of any mathematical object.
Null (multi)vectors = null tensors = null spinors = zero scalar
I've seen a lot of sources say "zero vector" (as in the vector that obeys 0 + v = v). In relativity, "null vector" has a different meaning: it's a vector whose length is zero. This is different from the zero vector, as any non-zero vector that points in the direction of a light beam will have zero length.
what would you do with mixed grade objects
@eigenchris, I like to see the equivalent "group space", where zero is the identity for any group in the "group space."
@@eigenchris I'm not a native English speaker, so in my words "null vector" and "zero vector" meant the same. I was referring to 5:32, where the sum of two opposite bivectors should be the zero bivector, not the scalar zero. But as a comment above argued, it may not be important since we're gonna sum different grades in the end.
3:08 I can't believe how dull and artificial you are making Clifford Algebras out to be! They are the very essence of Physics. At least on Rigel 6.
im ngl this is why people think GA people can be annoying at times… just because he didn't present it in the way you wanted it (and actually stuck with the style of his channel) doesn't mean he did anything wrong 😐
@@tylerfusco7495 No offence intended, and I never suggested anything was wrong; perhaps a little more excitement is all. A lot gets lost in literal translation of comments. Maybe my first sentence ought to have been: Hey Man, Clifford Algebras are way more interesting in their own right? And I was referring to how we see things on Rigel 6 btw :-)