I think you should read the description.
whatthefuckishappening?
'-'
I did. I'm still processing it
😂😂😂😂🤣🤣🤣👏👏
I really liked this video because of all the math this dude did. What a guy.
Fascinating, especially the last result which I hadn't seen before.
I find that an easy way to grasp quaternionic behavior under exponentiation is to see that any single quaternion (other than a purely real one) defines a complex sub-algebra of the quaternions and that sub-algebra behaves the same way as the complex numbers; its purely imaginary part, unitized, plays the same role as the complex number i. A lot of unintuitive stuff got much simpler when I understood that.
What a gnarly way to start the week.
Thank you, professor.
The reason we use H for the quaternions is only secondarily because the letter H stands for Hamilton.
Firstly: we can't use the obvious Q because that's already taken.
This is, of course, a rational reason.
This is the first time quaternions made any sense to me, in spite of several attempts. As usual, presented in a clear no-nonsense way. Thanks a lot!❤
haha same, this makes a lot more sense than anything I've seen
God I love mathematics. It's so lovely to play with ideas in this way.
wow, never thought that i^j would be equal to i*j=k, that's cool. Nice video as always Michael! :D
Well, ofc you wouldn't think that because it's BY DEFINITION. It's one of the things that make quaternions... quaternions! 😉
Just like i^2 = i*i = -1 is by definition what makes a complex number complex.
@@mikeiavelli The definition says i*j=k, but it's not immediately obvious from the definition that i^j=k, at least not to me.
Wait what about i^^j
At 12:38 that seems to me to be a huge leap of faith. I expect that if you, say, expanded e^x as a series, stuck I in it, and simplified, you could get the stated result, but saying it is so just because I^2=-1 seems like an awful stretch.
I am waiting to see the octonions video with the nonassociativity hell 😂
Just wait for the sedenions or Bi-Sedenions!
Exponentiation of a matrix is just applying the series representation of the exponential to the matrix. It is so amazing to see the concept of exponentials generalize this way... Same for the logarithm. Thank you for making the video!!
Yes. My favorite example of generalized exponentiation is the math of squeezed light, such as what LIGO uses. It's the sinh and cosh of the sum/ difference of the creation and annihilation operators, the quantum mechanical operators that mean "add a particle" and "remove one". Since sinh and cosh are just sums of exponentials, we are exponentiating to the power of "add a particle" etc and it actually works!
@@Tehom1 this sounds awesome, is there a RUclips resource you can point to??
@@scalex1882 I read this in a scientific paper and I'm afraid I don't know of a youtube video about it, sorry.
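For anyone who wants to see the series definition from the top of this thread in action, here is a minimal numerical sketch (the example matrix and the 30-term truncation are arbitrary choices, and scipy is used only as a cross-check):

```python
import numpy as np
from scipy.linalg import expm

# exp(A) computed by literally summing the power series sum_n A^n / n!
def exp_series(A, terms=30):
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n          # now holds A^n / n!
        result = result + term
    return result

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # exp of this is a rotation matrix
print(np.allclose(exp_series(A), expm(A)))   # True
```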
Fun video. BTW, the matrix representation of the quaternions is related to the group SU(2), which is the double cover of SO(3), and thus quaternions can be used to represent rotations in three dimensions.
Nice connection. Thanks!
I loved this! Would love to see this extended to the Clifford representation.
Whenever I see a video about complex numbers I automatically wonder about quaternions. Whenever there’s quaternions I automatically wonder about geometric algebras.
Fun fact about Euclidean (Clifford) geometric algebra is that the basis vectors square to 1. Products of non-parallel vectors create bivectors. Unit bivectors square to -1 so they act like i in Euler’s formula. In fact, the scalars plus bivectors form a sub algebra that’s equivalent to the quaternions!
@@zemoxian Even when the vectors have more than 3 dimensions?
if you feed vectors into it you get hyperbolic rotations
@@Apollorion clifford/geometric algebra works independent of the dimension of the vector space that you feed it. It just requires a vector space *and* the inner product structure you feed it (i.e. if some of the vectors square to -1, or even 0).
Even a generalization of the cross product, called the 'wedge product', works in a geometric algebra generated by a vector space of more than 3 dimensions. It comes down to how the cross product works based on what the orthogonal complement of the vectors you're feeding it *is* (and the cross product fails in higher dimensions because it demands to output just a vector, while in 4D there is more than one direction orthogonal to a given pair of vectors. The wedge product, which works on multivectors, doesn't have this limitation).
EDIT: One more thing I want to add is that a lot of people will mention wedge products as not having a visualization. I would be wary of statements like these: they can be represented as the plane spanned by the two vectors you're 'wedging' (but this plane, or circle, or whatever, also has an orientation to it, so e1 ^ e2 has the opposite orientation to e2 ^ e1). They may have very good algebraic explanations but not the geometric explanation handy, and that's totally ok! (The vice-versa can also happen, and you should be wary of that too when/if you run into it.)
@@monadic_monastic69 Thank you for your response. I realise I do not know enough about the clifford/geometric algebra yet.
The title for this video is quite clear and descriptive. The title card is also quite clear and descriptive. Thank you.
for the exercise at 5:27, writing q for a + bi + cj + dk and q*/|q|^2 for (a - bi - cj - dk)/(a^2 + b^2 + c^2 + d^2), one needs to check _both_ q(q*/|q|^2) = 1 and (q*/|q|^2)q = 1, since quaternions are noncommutative - we have |q| ≠ 0 if and only if q ≠ 0 so q*/|q|^2 gives the inverse of any nonzero quaternion, and one can clear the denominators in the checks, meaning it is equivalent to check that qq* = |q|^2 = q*q.
You do not really need to look at both if you find that one of them is |q|². If you found that qq*=|q|², you could note that q*q=(qq*)*=(|q|²)*=|q|², and say that you found the answer since |q|² is a real number.
If you were to find that q*q≠|q|², that would just mean that q*/|q|² is not the inverse, as noted above.
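A small sanity check of the exercise, with quaternions as plain (a, b, c, d) tuples and the Hamilton product written out by hand (a sketch; the helper names are made up for this snippet):

```python
# Hamilton product of p = a1 + b1 i + c1 j + d1 k and q = a2 + b2 i + c2 j + d2 k
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

q = (1.0, 2.0, -3.0, 0.5)
print(qmul(q, conj(q)))       # real part 14.25 = |q|^2, imaginary parts 0
print(qmul(conj(q), q))       # same, so q * (q*/|q|^2) = (q*/|q|^2) * q = 1
print(sum(x*x for x in q))    # 14.25
```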
when I learned that Pauli matrices have such a similar Euler's identity I was really surprised. But if you add the identity you get exactly the quaternion algebra, so knowing that, the quaternion exponentiation makes perfect sense
regarding the end, there: it feels as though we'd need to define a separate left-sided exponentiation to account for the lack of commutativity, at a bare minimum
Fun video. Lord Kelvin declared that quaternions, “though beautifully ingenious, have been an unmixed evil to those who have touched them in any way, including Clerk Maxwell” 👿
Both of those formulae are special cases of the Clifford algebra formulation of 3D rotations :)
We're nudging closer to the Clifford Algebra and Geometric Algebra
My first introduction to quaternions was glancing through an applied electrical engineering text. Fascinating something so theoretical is so essential to describe electrical behavior.
The unit quaternions form a group isomorphic to SU(2), whose Lie algebra su(2) describes the angular momentum of fermionic particles (in other words, the so-called spin 1/2 of the elementary matter particles such as the electron, quark, and neutrino). And the "vector algebra" (dot product and cross product) used to write the classical laws of EM was historically extracted from the quaternion algebra to begin with. It is indeed fascinating.
I was introduced to quaternions by the video done by 3Blue1Brown... and they keep getting stranger and more interesting with every new thing I learn about them. The exponential transformations presented in this video are an especially brain-bending result.
I'd suggest looking into the clifford/geometric algebra way of looking at quaternions next. They'll certainly get more interesting, but less strange (an algebra on planes, and there are three planes in 3D space: xy, yz, zx. Oh btw, i, j, k, are basically the normal vectors to these guys if you wanted to think of it that way), and though I like 99.9999% of 3blue1brown's vids, I don't agree with the framing of quaternions as inherently "4-dimensional creatures" (and the clifford/geometric algebra view on this clears this up, also clears up what exponentials mean when you're putting in different objects in there).
While at the same time, if you *want* 4D things, you can look at the clifford/geometric algebra generated by a 4D vector space: 'space-time algebra'.
@@monadic_monastic69 Unless the number of videos produced by 3blue1brown is a multiple of a million, the percentage of them you like cannot be 99.9999.
As a physicist I feel like you should discuss the connection to SU(2). It's quite simple to show the version of Euler's formula in that context as well.
Very interesting and beautiful results. Thanks for sharing.
I did not expect spinors to make a surprise appearance at the end. But you are spinning things so I guess I should've
At the 19:42 mark, shouldn't the lower left "i" be "-i"? In other words, shouldn't the lower left of your matrix, "-c+di", actually be -(c+di) = -c-di? When you do this, you get the matrix [0 c+di], [-c-di 0], and when you let c=0 you get [0 i], [-i 0] = k. I can see this clearly when I use the 4x4 matrix implementation of quaternions directly, which I show in my video: @FractalWoman "Demystifying Sir William Rowan Hamilton's Quaternions" ruclips.net/video/vXyNA0ORYfA/видео.htmlsi=62Eao0gdh7O5LPRp
18:45 Why is it log(A)*B rather than e.g. B*log(A)? Isn't that rather arbitrary choice (unless the matrices commute)?
There's a quaternion version? Damn, you learn something new every day.
I think the contradiction showed in the short video can also be seen here.
when you take i^j = (e^{i*pi/2})^j, pulling j inside the exponential isn't well defined, I think: would it necessarily be multiplication from the left or should it be from the right? Because of the commutation relations i and j have, it can be either e^{k*pi/2} or e^{-k*pi/2}.
also, as usual, great video. I love those videos of yours where you poke known math in unusual ways to see what comes out
I love the way you present the ideas. Engaging and clear.
You asked about the red-brown chalk once. It is a bit hard to see, for example at 2:06, around "example calculations". But these are quibbles. It works for boxes and lines, and the rest of the boards are clear to follow.
your feelings are irrational
This was a very interesting, fun presentation. Thank you professor))
Lovely video, as usual. Great channel to watch. Cheers.
THE DESCRIPTION LMAO
Some years ago I calculated a general formula for a quaternion raised to a quaternion power. Quite messy and arduous, but it was fun and the result is quite neat: a scaled rotation about the axes of both original quaternions, followed by a rotation about the axis of their cross product. (Every time I say "the axis of" I mean the axis defined by the purely imaginary part, and the cross product is also taken within the imaginary parts.)
Edit: of course, this was all abusing notation and glossing over the sketchiness of using exponentials on a non-commutative algebra, but I was in my first year of college, so chill, I didn't have the knowledge to take that into consideration xD
In a way, e^{i\theta} is an abuse of notation.
@@cxpKSip notations are made to be abused and stretched to the limit - that's how all ideas are tested and great ones are created. Keep it up!
1:09 The 'where' could/should include ijk = -1. Then the 6 relations below follow from these 4 relations, which are easier to remember.
Consistency with the order of the imaginary and its coefficient might be a good idea, especially when you start getting into quaternions.
I mean, if you try to go the route of complexifying a complex number, (a + bi) + (c + di)j = a + bi + cj + dk is fine but (a + ib) + j(c + id) = a + ib + jc - kd is slightly problematic.
You could do (a + ib) + j(c - id) = a + ib + jc + kd, of course, which is a nice callback to the particular complex matrix form you use; though I prefer a different form.
Anyway, the signatures are pretty arbitrary here (as sign is, generally) so long as you are consistent, throughout.
I'd just stick with trailing imaginaries, personally, but then you run into a formatting issue with Euler's formula. If you want to use the leading i in Euler's and the conjugate imaginary component in the matrix it's probably best to stick with leading everywhere to avoid problems.
What are the connection between the quaternions and the Pauli matrices?
The Pauli matrices are generators of the Lie group of unitary 2x2 matrices with determinant +1, i.e. SU(2). The key relation is
U = exp(i(b sigma_x + c sigma_y + d sigma_z))
in which
sigma_x = [[0, 1], [1, 0]]
sigma_y = [[0, -i], [i, 0]]
sigma_z = [[1, 0], [0, -1]]
(i.e. the Pauli matrices), b, c and d are real coefficients, and U is an arbitrary member of SU(2).
The relation to the quaternions comes from the identification
i = -i sigma_x, j = -i sigma_y, k = -i sigma_z
(quaternion units on the left, complex i on the right; each of these squares to minus the identity and their products reproduce ij = k etc.), so a unit quaternion q = exp(bi + cj + dk) corresponds to an element of SU(2).
Michael's result for a=0 is
q = exp(bi + cj + dk) = cos(B) + I sin(B)
with I = (bi + cj + dk)/B, B = sqrt(b^2+c^2+d^2)
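A quick numerical check of this correspondence (a sketch in numpy/scipy; the sign convention i ↔ -i·sigma_x used here is one common choice, not something fixed by the video):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# identify the quaternion units with -i * (Pauli matrices)
qi, qj, qk = -1j * sx, -1j * sy, -1j * sz

assert np.allclose(qi @ qi, -I2) and np.allclose(qj @ qj, -I2) and np.allclose(qk @ qk, -I2)
assert np.allclose(qi @ qj, qk)           # ij = k
assert np.allclose(qi @ qj @ qk, -I2)     # ijk = -1

# exp(bi + cj + dk) is unitary with determinant 1, i.e. an element of SU(2)
b, c, d = 0.3, -1.2, 0.7
U = expm(b * qi + c * qj + d * qk)
assert np.allclose(U @ U.conj().T, I2) and np.isclose(np.linalg.det(U), 1.0)
print("all identities check out")
```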
I actually finally understand quaternions now.
i^j = ±k depending on how you apply rule for the exponential of an exponential (right vs left).
This is true of matrix exponentiation as well: it gives rise to two kinds of exponentiation, left and right, which for commutative multiplication are the same. What is less obvious to me is what the principal branch of the quaternion logarithm should be. If you take the Log of -1, which quaternion do you use? Any purely imaginary quaternion with modulus 1 will work.
Editor note: @5:26 - don't use red chalk for text, just outline. Can't read "exercise". Also, it seems there is a mild glare in the center of your board. Change to position lights or camera to not glare.
It is not clear to me why i^j should be k instead of -k. Why do we say that (e^(i*pi/2))^j is e^(ij*pi/2) instead of e^(ji*pi/2)?
Because multiplication of the imaginary units is not commutative but anticommutative. The exponent is just (iπ/2)j, that is ijπ/2; if we wanted to write it with jiπ/2 we would need to add a negative sign as well, so the exponent can be written as -jiπ/2.
@rehanchopdar617
I'd like to ask for clarification on why the multiplication of the exponents would lead only to i being multiplied by j and not j being multiplied by i.
I understand that the quaternions aren't commutative. I am just asking why the exponents would multiply in that order.
@@alnfsyh9403 how do you interpret (a^m)^n? I would say a^(m×n). Like I said, the multiplication is anticommutative, m×n = -n×m. There are lots of ways to justify the anticommutativity of the multiplication and the resulting order in the exponent; you can search them up.
13:45 ... My heart sank as I expected you to say "and this would be a nice place to stop". I watched to the end and, to be honest, didn't need the matrix stuff; I was already knocked out by the beauty of it all. I dare say some electrickery wiz will find some use for 'i' and 'k' in their modelling.
What is this video description lol
Great vid, I love seeing quaternions at the zoo like this. Wish I saw more in the wild...maybe.
8:46 almost fell over but kept the balance. well done professor
I came up with a better, simpler and easier to understand implementation of Euler's formula in quaternion form that can be implemented directly into a computer program to do rotations about an arbitrary axis in 3D space. @FractalWoman "Demystifying Sir William Rowan Hamilton's Quaternions" ruclips.net/video/vXyNA0ORYfA/видео.htmlsi=62Eao0gdh7O5LPRp A link to the computer code is available in the description.
Quaternions written as matrices were the theme of France's most difficult math contest (X-ENS, Maths A) last week, which is used to enter the best engineering schools.
That description though lol
I really liked this video because of all the math this dude did. what a guy :)
Great explanation of a tough topic. Great Job!
thanks for your persistence
I suggest you start with q = a + b1*i + b2*j + b3*k and b = sqrt(b1^2 + b2^2 + b3^2), so that q = a + b*î with î the normalized imaginary part, |î| = 1 and î = (b1*i + b2*j + b3*k)/b. Then e^q becomes very similar to the complex case: e^q = e^a*(cos(b) + sin(b)*î), which looks much more like the notation in the complex situation. Right?
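A quick numerical check of that closed form against the matrix exponential, assuming the 2x2 complex representation a+bi+cj+dk ↔ [[a+bi, c+di], [-c+di, a-bi]] discussed later in the video (the scipy cross-check is my choice):

```python
import numpy as np
from scipy.linalg import expm

def as_matrix(a, b1, b2, b3):
    return np.array([[a + b1*1j,  b2 + b3*1j],
                     [-b2 + b3*1j, a - b1*1j]])

a, b1, b2, b3 = 0.4, 0.7, -1.1, 0.2
b = np.sqrt(b1**2 + b2**2 + b3**2)

# e^q = e^a (cos b + sin b * i_hat), with i_hat = (b1 i + b2 j + b3 k)/b
closed_form = np.exp(a) * (np.cos(b) * np.eye(2)
                           + np.sin(b) * as_matrix(0, b1/b, b2/b, b3/b))
print(np.allclose(closed_form, expm(as_matrix(a, b1, b2, b3))))   # True
```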
I really loved the video, it's very good. Thank you sir 😊 we will support you
Plotting the unit circle, we see that the form given in this video is related to the identity cos squared plus sin squared = 1 by multiplying by the complex conjugate.
Recall at 18:20, why is a+b*i+c*j+d*k equal to this matrix?
The rest is clear: you can produce i by setting a=c=d=0 and b=1, and j by setting a=b=d=0 and c=1, then use exponent rules and matrix multiplication along with A^B=(e^(log A))^B to get i^j=k.
It is not equal, it is just a different representation, called the complex matrix representation.
Michael shouldn't have used an equals sign, he should have used something like a ⇿ (I mean a two-headed arrow, like ←→ as one symbol, but when I put the default two-headed arrow ↔ it comes out as an emoji XD)
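For the curious, here is a rough numerical version of that matrix route, using scipy's principal matrix logarithm and the representation convention a+bi+cj+dk ↔ [[a+bi, c+di], [-c+di, a-bi]] (both of which are assumptions of this sketch, not steps from the video):

```python
import numpy as np
from scipy.linalg import expm, logm

def as_matrix(a, b, c, d):
    return np.array([[a + b*1j,  c + d*1j],
                     [-c + d*1j, a - b*1j]])

I_, J_, K_ = as_matrix(0, 1, 0, 0), as_matrix(0, 0, 1, 0), as_matrix(0, 0, 0, 1)

# A^B := exp(log(A) B), with the principal matrix logarithm
print(np.allclose(expm(logm(I_) @ J_), K_))    # True: this ordering gives i^j = k
print(np.allclose(expm(J_ @ logm(I_)), -K_))   # True: the other ordering gives -k
```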
Professor Penn,
Why do I think you are a genius?
Is this an example of the exponential map from a Lie algebra to its Lie group? The group being the multiplicative group of quaternions of magnitude 1 (so-called versors, isomorphic to SU(2)) and the algebra being the infinitesimal displacements from the identity element, q=1.
I'd be grateful for a reply though I'm not sure if I'd understand it -- despite Michael's efforts, Lie algebras still turn my brain to jelly!
Yes, this is correct.
How much of Complex analysis can be extended to the Quaternions? I have often wondered about this.
Do you have any sources or references?
Using i as quaternion and complex in same equation is VERY confusing.
It's somewhat related to multivariate complex analysis. But I'm not an expert and I don't know how the loss of commutativity changes the analysis.
Not much, because they are non-commutative. For example, the Fundamental Theorem of Algebra fails!
@@michaelaristidou2605 Thanks.
The analogue of complex analysis for quaternions is called quaternionic analysis.
@@schweinmachtbree1013 Is it good for anything?
interesting that the complex numbers can't just be treated as a plane within the quaternions, but I guess that kind of makes sense the same way that imaginary numbers can't be treated like a number line within the complex numbers
I barely understand normal math and then there is this
The ending is a bit concerning, as you mentioned.
We can still define exp(a+BꞮ) = e^a(cos(B)+Ɪsin(B)). From here, let's define the modulus of a quaternion q = a+bi+cj+dk as |q| = sqrt(a^2+b^2+c^2+d^2).
Then, letting q = a+bi+cj+dk = a+BꞮ as above, we can define a quaternionic logarithm as
log(q) = ln|q| + Ɪarccos(a/|q|), for some choice of arccos(a/|q|)
Then we can check that exp(log(q)) = q for all quaternions q.
Next, given two quaternions p and q, the next question is how to define q^p. The sneaky part here is that we have two reasonable options: exp(log(q)*p) or exp(p*log(q)). Note that these two do _not_ have to be equal to each other since quaternion multiplication is noncommutative.
My proposal, then, is to define two versions of exponentiation - right exponentiation and left exponentiation. At the risk of confusion with tetration notation, we use the following notations:
qᵖ = "q right exponentiated by p" = exp(log(q)*p)
ᵖq = "q left exponentiated by p" = exp(p*log(q))
So in your example of i raised to the j, we have to consider a few things:
For q = i, we have a = 0, B = 1, Ɪ = i, |q| = 1, so log(i) = ln(1)+i*arccos(0) = iπ/2, let's say.
iʲ = "i right exponentiated by j" = exp(log(i)*j) = exp(ijπ/2) = exp(kπ/2) = cos(π/2)+k*sin(π/2) = k
ʲi = "i left exponentiated by j" = exp(j*log(i)) = exp(jiπ/2) = exp(−kπ/2) = cos(−π/2)+k*sin(−π/2) = −k
This explains the discrepancy in your short where you talk about this.
Some interesting things to point out here: if r is a positive real number, then log(r) = ln|r|, which is a real number and commutes with all quaternions. So for any quaternion p and positive real number r, we have rᵖ = ᵖr (i.e., right and left exponentiation of r by q produces the same result). So we haven't introduced an incompatibility between e^p and exp(p) (using a good choice of argument).
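Here is a small Python sketch of the proposal above, with quaternions as (a, b, c, d) tuples and the principal choice arccos(a/|q|) in [0, π] (the helper names are mine):

```python
import numpy as np

def qmul(p, q):
    a1, b1, c1, d1 = p; a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qexp(q):
    a, v = q[0], np.array(q[1:], dtype=float)
    B = np.linalg.norm(v)
    if B == 0:
        return (np.exp(a), 0.0, 0.0, 0.0)
    return (np.exp(a)*np.cos(B), *(np.exp(a)*np.sin(B)*v/B))

def qlog(q):                       # assumes q is not purely real
    a, v = q[0], np.array(q[1:], dtype=float)
    n = np.sqrt(a*a + v @ v)
    return (np.log(n), *(np.arccos(a/n)*v/np.linalg.norm(v)))

def right_pow(q, p): return qexp(qmul(qlog(q), p))   # exp(log(q)*p)
def left_pow(q, p):  return qexp(qmul(p, qlog(q)))   # exp(p*log(q))

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(np.round(right_pow(i, j), 6))   # ~ (0, 0, 0, 1)  = k
print(np.round(left_pow(i, j), 6))    # ~ (0, 0, 0, -1) = -k
```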
As a bit of an addendum, in general, exp(p+q) will not be equal to exp(p)*exp(q) since quaternions don't commute. And certainly p^(q+s) will not be equal to p^q * p^s for quaternions p, q, and s (for either left or right exponentiation).
As a bit of fun, we can even consider
i¹⁺ʲ = exp(log(i)*(1+j)) = exp(iπ/2*(1+j)) = exp(iπ/2+kπ/2) = cos(π/√2) + (i+k)*sin(π/√2)/√2
whereas i¹*iʲ = i*k = −j
So these are very different results.
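A numerical spot-check of this addendum, done in the 2x2 complex matrix representation [[a+bi, c+di], [-c+di, a-bi]] (my choice of tool, not part of the original comment):

```python
import numpy as np
from scipy.linalg import expm

def as_matrix(a, b, c, d):
    return np.array([[a + b*1j,  c + d*1j],
                     [-c + d*1j, a - b*1j]])

I_, J_, K_ = as_matrix(0, 1, 0, 0), as_matrix(0, 0, 1, 0), as_matrix(0, 0, 0, 1)
one = np.eye(2)

lhs = expm((np.pi/2) * I_ @ (one + J_))                 # i^(1+j) = exp(log(i)*(1+j))
rhs = (np.cos(np.pi/np.sqrt(2)) * one
       + (np.sin(np.pi/np.sqrt(2))/np.sqrt(2)) * (I_ + K_))
print(np.allclose(lhs, rhs))                            # True
print(np.allclose(I_ @ expm((np.pi/2) * I_ @ J_), -J_)) # True: i * i^j = i*k = -j
```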
Suddenly I feel thankful for Heaviside's version of Maxwell's equations
I really liked this video because of all the math this dude did. what a guy
17:13: as far as I know exponent rules for complex numbers are not the same as for real numbers
This is exactly what I needed!!
Quaternions didn't click for me until I learned a little geometric algebra.
Hi Michael, there is a major simplification being made when applying the Power Rule for Exponents to the quaternions: you said exp(i*pi/2)^j is exp(i*j*pi/2), yet I think j ending up on the right side of i is not so obvious. Why is this the case? Since normally this rule was applied to commutative numbers, with quaternions there must be a special explanation for this specific order.
Edit: Probably this is related to the problem you mention in the short video.
I wonder if we can use the properties of the exponential to write
exp(a+bi+cj+dk) = exp(a) exp(bi) exp(cj) exp(dk) = exp(a)(cos b + i sin b)(cos c + j sin c)(cos d + k sin d)
... but this looks very _sketchy_, given that i, j and k do NOT commute. Probably the right way to go about this path is to write the exp as a series and take proper care of the commutation relations.
You're right in that it's sketchy. In general, the power laws only actually apply when multiplication is commutative. You can pull out the scalar part since it commutes with the rest, but you can't split up the imaginary part.
It is definitely a wrong proof/derivation. The rigorous, general way to deal with something like that is representation theory.
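A short numerical illustration of that failure, using the 2x2 complex matrices [[a+bi, c+di], [-c+di, a-bi]] as a stand-in for quaternions (an assumption of this sketch):

```python
import numpy as np
from scipy.linalg import expm

def as_matrix(a, b, c, d):
    return np.array([[a + b*1j,  c + d*1j],
                     [-c + d*1j, a - b*1j]])

b, c = 0.8, 1.3
lhs = expm(as_matrix(0, b, c, 0))                                # exp(bi + cj)
rhs = expm(as_matrix(0, b, 0, 0)) @ expm(as_matrix(0, 0, c, 0))  # exp(bi) exp(cj)
print(np.allclose(lhs, rhs))   # False: the exponential does not split
```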
More Quaternions!
When teaching the addition of many disparate components, I like to underline the components as I deal with them.
This establishes a bookkeeping methodology for the students (and me!) that prevents losing track of terms in a long list.
The quaternion problem posed in France's Math A 2023 exam is a beautiful exam for better understanding the quaternions, especially their matrix representation.
Does the entirety of the boards after 13:48 belong to e^q = e^a( ... ? There is no closing parenthesis.
13:41 this is in fact the quaternionic version of this formula
Nice! Can you do Maxwell's equations next?!
I guess the description is a mathematical secret code
I have to say that I also was very suspicious of the exponential power equation. In effect, you want to find a way that given an analytic function of two complex variables f(x,y), you can extend it to a non-commutative algebra f(A,B). I think that the problems you found show that this can't be done in a unique way.
For example, with matrices the rule exp(A+B) = exp(A)exp(B) is not true in general. So you cannot simply assume that what works in the commutative case works in the non-commutative case.
You can likewise show that j^i = 1/k, which is pretty cool.
19:04 shouldn't it be -1 in left corner?
edit: yep, 20 seconds later it should 🤗
Quaternions, octonions, tetrads, tensors, twistors and vectors are devices for the physical description of motion and the variation of quantities.
This is strictly computer graphics territory too. Hamilton, among many other mathematicians, was a genius.
Shouldn't the modulus of the quaternion around 5:30 be under a square root???
This is quite an intuitive reconstruction of Euler’s formula. But after watching, I’m left to wonder: what about infinite tetrations of quaternions? Are there any that converge to a real and/or transcendental number? And what about the reciprocal of an infinite tetration of quaternions?
Really interesting, thanks!
What’s the short video that shows why this is sketchy?
It’ll be out tomorrow. On RUclips Shorts.
-Stephanie
MP editor
@@MichaelPennMath looking forward!
@@MichaelPennMath So satisfying to find the answer to the question you were about to ask :)
Several people say they find quaternions confusing. Let me, as a physicist, say how I get intuitive about them. Let me know (with likes or comments) how far you find this helpful.
--->> Feel free to ignore me if you don't find this helpful; or if you are such a pure mathematician that you don't take input from those of us who show off the beauty of your subject by actually applying it to something else.
Having met 3D vector algebra before quaternions, I find it helpful to think in terms of the scalar and vector products of two 3-vectors, together with the result of multiplying two scalars and the result of multiplying a scalar by a vector. So I know without stopping to think how each of these products works. I know that one of these is anticommutative (i.e. a×b = -b×a) and the other products commute.
I'm already familiar with the letters i j k and how they multiply together.
Here comes the only unexpected fact: whereas the dot product i·i is unity in vector algebra, the quaternion product i·i is negative. That's my first and only surprise. Everything else is already familiar.
Before I get to differential operators, sticking with numbers, I imagine the new mathematical entity (r, v) where r is a real and v is a three-vector.
Nomenclature:
From relational algebra I'd call that a tuple on (R, V) but I would welcome better suggestions from you or math guys here.
Then the product q1 q2 is nothing more than the product
(r1, v1) (r2, v2)
Remembering to calculate the four possible cross terms, and to preserve order. That's necessary because we have a mix of commutative and anticommutative terms.
Then it all works out from the existing multiplication rules, so long as I continue to remember that the real part coming from the parallel components has the opposite sign to what I would have expected from the dot product.
That inverted sign of the dot product really is the only tricky thing about them, in my opinion. And even that's obvs from complex numbers.
I'm sufficiently familiar with each of those five products that I don't have to learn much that's new.
Note that if we ignore the real parts of the inputs and the outputs, then multiplication of quaternions is the same as the vector cross product. Even the letters i j k have roles which are isomorphic if we ignore the real parts throughout.
Next: then the real part of the result of quaternion multiplication is simply the product of the real parts MINUS the scalar product of the two vectors. That turns out to be very useful in special relativity.
And, to jump ahead to really advanced stuff
I'm already geared up to understand that div V in vector algebra will have a counterpart differential operator on the quaternion imaginary parts called conv V. Because of the different sign convention, one operator tells you how a field diverges from a point, and the other how that same field converges.
I won't think, when I read Maxwell's original paper, that he got his signs wrong, as many undergrad physicists do. Maxwell was working with the imaginary parts of quaternions, though he called them something else, so where we have div E in his equations he actually wrote conv E.
(Senior moment here: remind me please what the numbers are called when you project (r, v) onto just v -- I know I used to know a name for these numbers, which are simply the imaginary parts of the quaternions here. It is an unconventional number set because of course they are not closed under multiplication.)
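A tiny sketch of the (scalar, 3-vector) product rule described above (the function and variable names are mine, not standard):

```python
import numpy as np

# (r1, v1)(r2, v2) = (r1*r2 - v1.v2,  r1*v2 + r2*v1 + v1 x v2)
def qmul(q1, q2):
    r1, v1 = q1[0], np.asarray(q1[1], dtype=float)
    r2, v2 = q2[0], np.asarray(q2[1], dtype=float)
    return (r1*r2 - v1 @ v2, r1*v2 + r2*v1 + np.cross(v1, v2))

i = (0.0, [1, 0, 0]); j = (0.0, [0, 1, 0])
print(qmul(i, i))   # scalar part -1, vector part 0: the one sign surprise
print(qmul(i, j))   # scalar part 0, vector part (0,0,1): i*j = k, like the cross product
print(qmul(j, i))   # vector part (0,0,-1): j*i = -k, the anticommuting piece
```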
Any chance of getting a link to the video you're talking about that explains why this definition is sketchy? :)
it'll be posted tomorrow :)
-Stephanie
MP Editor
I don't know what to do about the description of this video
Seems like people are very interested in algebraic structures
19:45 Not sketchy at all. In fact you just proved what I have established long ago:
ALL QUATERNIONS ARE NATURAL LOGS AND MUST THEREFORE OBEY ALL THE RULES OF EXPONENTS.
Thus by the power rule of exponents i^j=i*j=k.
You can prove this using DeMoivre's formula. You can also prove it by using his [j^j=e^(-pi/2)]=j*j=-1 if the negative sign preceding pi is i^2 and you treat ^j and i^2 like exponents. In fact you would get e^(i*pi)=-1.
The problem here is that you lose the regular exponential rules: quaternion addition commutes but quaternion multiplication doesn't. Suppose you have exp(i*pi/2)=i and exp(j*pi/2)=j; then exp(i*pi/2)*exp(j*pi/2) != exp(j*pi/2)*exp(i*pi/2) since k != -k. But exp((i+j)*pi/2) = exp((j+i)*pi/2), so clearly exp(a)*exp(b) != exp(a+b) in general. If I were to guess, I would suspect that the series definition of exp doesn't always converge for all quaternion arguments.
Upon further investigation it seems like it does converge, but much like matrix exponentials you just don’t have exponent rules when your arguments don’t commute.
Isn't it necessary to prove that e^(BI) = cos B + I sin B, instead of just stating it as a fact, with the argument "because I behaves in a similar way as the complex number i"?
You only need that I^2 = -1 which is easily verified. After that the series expansions work in exactly the same way as with e^(Bi)
@@pwmiles56 OK, that makes sense.
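For completeness, here is the series argument the reply is referring to, written out (a sketch that uses only Ɪ² = -1 and the fact that the real number B commutes with Ɪ):

```latex
e^{B\mathrm{I}}
  = \sum_{n\ge 0}\frac{(B\mathrm{I})^{n}}{n!}
  = \sum_{m\ge 0}\frac{B^{2m}\,\mathrm{I}^{2m}}{(2m)!}
    + \sum_{m\ge 0}\frac{B^{2m+1}\,\mathrm{I}^{2m+1}}{(2m+1)!}
  = \sum_{m\ge 0}\frac{(-1)^{m}B^{2m}}{(2m)!}
    + \mathrm{I}\sum_{m\ge 0}\frac{(-1)^{m}B^{2m+1}}{(2m+1)!}
  = \cos B + \mathrm{I}\sin B
```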
interesting ! thank you very much
Well, it doesn't commute, I was waiting for that...
It's linear transformations, I use those tools a lot for 3D cg
This comes from Clifford algebra, or as it's known these days, geometric algebra.
I don't know why he said i^i is such a famous video concept. That has to be one of the easiest math results to arrive at that I can think of...
For pedagogical reasons I understand why you left out the multivaluedness of various results, but when it comes to the M_2(C) matrices representing H being brought to powers of each other, that has A LOT of multivaluedness going on! Log comes with all integer multiples of 2πi times the identity matrix.
Wonderful video, but I haven't understood the last step; could you please explain or prove it?
Wonderful! Thanks so much 😮
Wait, so I know i = e^(i*pi/2), and similarly for j and k, but what about a general quaternion like i+2j+k? What is that in exponential form?
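A sketch of one way to answer this for q = i + 2j + k, following the B and Î construction from the video (the numpy code and variable names are mine):

```python
import numpy as np

a = 0.0                                # real part of q = i + 2j + k
b = np.array([1.0, 2.0, 1.0])          # coefficients of i, j, k

norm_q = np.sqrt(a**2 + b @ b)         # |q| = sqrt(6)
B = np.linalg.norm(b)                  # length of the imaginary part
unit = b / B                           # I_hat = (i + 2j + k)/sqrt(6)
theta = np.arctan2(B, a)               # here pi/2, since the real part is 0

print(norm_q, theta, unit)
# so i + 2j + k = |q|(cos(theta) + I_hat sin(theta)) = e^(ln(sqrt(6)) + (pi/2)*I_hat)
```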
How can we define the quaternions j and k without any matrix?
In the case of the imaginary number i, i = square root of -1, but for quaternions they use matrices, and a matrix is not an operation like a square root; a matrix is used for describing the dimensions of something, it is not an operation.
Question: can you take the logarithm of a matrix by using the spectral decomposition? I've never seen that done but it seems to me that if you can use the spectral decomposition to exponentiate a matrix then it should also be possible to take a matrix and use the spectral decomposition to take its logarithm.
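For a diagonalizable matrix whose eigenvalues stay away from zero and the chosen branch cut, this does work; here is a rough numerical sketch (scipy's logm is included only as a cross-check):

```python
import numpy as np
from scipy.linalg import expm, logm

# A = V diag(w) V^-1  =>  log(A) = V diag(log(w)) V^-1
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

w, V = np.linalg.eig(A)
logA = V @ np.diag(np.log(w.astype(complex))) @ np.linalg.inv(V)

print(np.allclose(expm(logA), A))   # True: exponentiating undoes the log
print(np.allclose(logA, logm(A)))   # True here; logm also handles non-diagonalizable cases
```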
I really like this video this guy is awesome CHALK CHALK CHALK BLACK BOARD
I would love to thank you a lot for the video.
I have a question (extremely curious):
Is that method applicable to the higher hypercomplex numbers such as octonions and sedenions, or will there be some differences in the "B" and "I" construction?
Generally, every time you go up to a 'next level' of complexity, the algebra changes; in quaternions for instance, commutativity of components under multiplication is lost. I haven't played with Octonions yet, but expect it's a different game.