**Geometric Algebra**
Have you ever heard about the concept of geometric algebra? I was introduced to it by the excellent [Foundations of Game Engine Development](https://foundationsofgameenginedev.com/) by [Eric Lengyel](http://terathon.com/lengyel/). From that moment on I have been searching the internet for good resources on the subject, and I've always found them sparse, hard to get into, or a combination of both.
Nevertheless I have learned a lot about the subject.
The next step would be to write my own guide to it. The first challenge is to decide where to start, and what knowledge I take for granted in my readers. Quite a daunting task! This guide will not be perfect, and I can only ask any reader to give me feedback (find me on twitter) if they please. I will start very mathematically, because I like that mindset, but I want to eventually get into some useful bits.
What will you be able to get from this? The most direct advantage is that you will **understand** quaternions. But I also promise an incursion into a 5-dimensional non-Euclidean world, from which we will come back with an understanding of dual quaternions. We can even go to dodecaternions one day!
Sorry I haven't introduced myself. I am Isaac 'Atridas' Serrano i Guasch. And this is [my webpage](atridas87.cat).
# What is a vector?
We have to start somewhere. And there are two elements that geometric algebra is built upon: scalars and vectors. I'm gonna assume you know everything we need about scalars. Those are your old **numbers**. Mathematically, any *field* can be used. I will limit myself to the real ($\Real$) numbers. We don't need to go weird here[^complex].
Vectors are a different beast. They are arrows. Or things that behave like arrows anyway. We want vectors to have three defining characteristics:
+ **An attitude**: You can think of this as a direction, except that would be somewhat misleading, because of...
+ **An orientation**: Two vectors can have the same attitude, but opposite orientations. Direction would be the combination of attitude *and* orientation.
+ **A weight**: How strong the vector is. Sometimes this means length. Sometimes it is speed. Sometimes it is strength. It depends on what we are representing with the vector (a position, a velocity, a force?).
********************************************************************************
* *
* ^ +--------> <-----*-----> *
* / Opposite orientation *
* / +-----> *
* / Same attitude, *
* *------> different weight *
* Two vectors with *
* different attitude *
********************************************************************************
We want vectors to represent concepts that are related to space and geometry. We have mentioned it earlier, but their
main applications are to represent positions, velocities, accelerations, forces and directions. In general,
any *one-dimensional* concept embedded in a space of *any dimension* is a candidate to be represented with
a vector.
# Addition
We are talking about algebra, so you knew this was coming. In order to have an algebra we need some operations.
Turns out GA has two basic operations and about a thousand (just kidding) derived operations.
I'm getting ahead of myself.
Oh, some notation first, so we understand each other.
* $\Real$ the real numbers
* $a \in b$ a belongs to b
* $\forall a$ for all a
* $\exists a$ there exists a
* $\nexists a$ there does not exist a
* $a \implies b$ a implies b
* $a \iff b$ a implies b, b implies a (if and only if).
* $\mathcal{G}$ all elements of the geometric algebra
* $\mathcal{G}_0$ the scalars of the geometric algebra
* $\mathcal{G}_1$ the vectors of the geometric algebra
Anyway, the first operation we need is addition. This is the easy one, and I'm going to give you some rules for it
that you should be familiar with:
1. $\forall A, B \in \mathcal{G} \implies A + B \in \mathcal{G}$
2. $\forall A, B, C \in \mathcal{G} \implies (A + B) + C = A + (B + C)$
3. $\forall A, B \in \mathcal{G} \implies A + B = B + A$
4. $\forall A \in \mathcal{G} \implies \exists 0 \in \mathcal{G} : A + 0 = A$
5. $\forall A \in \mathcal{G} \implies \exists B \in \mathcal{G} : A + B = 0$
And now, in words:
1. The addition of two members of the geometric algebra is a member of the geometric algebra
2. Addition is associative
3. Addition is commutative
4. There exists an element "zero" (which, for all intents and purposes, is the zero you should already be familiar with) that is an identity with respect to addition.
5. Each element has an opposite that, when added, results in zero (so we can trivially define "subtraction").
This is easy to understand with scalars (you should already be familiar with them; if you are not, why are you reading this?), but it gets tricky when you try to add vectors.
To add vectors we use a technique known as "put one vector after the other, then draw from the beginning of the first to the end of the second". This trick only works[^headtail] if you interpret the weight of the vector as its length.
****************************************************
* a a *
* ^ *---> b *---> *
* a+b /| *-------> <---* *
* / |b *------------> b *
* / | a+b 0 *
* *-->+ a+b *
* a *
****************************************************
There is another thing the rules of addition let us do: adding a vector and a scalar together. For instance "3 + x" (where x is a known vector, not an unknown variable). This is not something we will usually do, but it is something we will find from time to time. The result of "3 + x" is... "3 + x". It can't be simplified. They don't really mix. It works like the complex numbers, if you are familiar with them ("3 + i" is just "3 + i", and it has meaning in itself).
In general, the addition of two scalars gives a scalar, and the addition of two vectors gives a vector. That's one of the reasons the subset of scalars is called $\mathcal{G}_0$ and the vectors $\mathcal{G}_1$: they are "closed" under addition (there are more reasons; I'll come back to this later).
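This mixed-grade behaviour is easy to sketch in code. Here is a minimal illustration, assuming a 2D algebra with basis $\{1, e_1, e_2, e_{12}\}$; the 4-tuple layout and the `add` helper are my own, not standard library code:

```python
# Sketch: addition in a 2D geometric algebra, storing a multivector as a
# 4-tuple of coefficients (scalar, e1, e2, e12). "3 + x" just keeps both
# coefficients around -- scalars and vectors never mix under addition.

def add(a, b):
    return tuple(p + q for p, q in zip(a, b))

three = (3, 0, 0, 0)      # the scalar 3
x     = (0, 1, 2, 0)      # a vector x = e1 + 2 e2
print(add(three, x))      # (3, 1, 2, 0): still "3 + x", unsimplified
print(add(x, x))          # (0, 2, 4, 0): vectors stay vectors
```

Note how each grade is closed under addition: adding two vectors only ever touches the $e_1$ and $e_2$ slots.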
Ok, let's go into more interesting things.
# The Geometric Product
This is where the magic happens. If you haven't worked with geometric algebra, this will be new to you. It may be confusing at first. But everything comes from here, and even though most resources I've found do not start from here but from more familiar places (the outer and inner products are the usual starting points), I've decided to start here because this is the actual foundation of everything.
Oh, by the way. The symbol for the geometric product is *no symbol at all*. Yes, it is a bit confusing, but "a b" is the geometric product of a and b. The space is sometimes omitted, so "ab" will be the same thing. I did not decide this one.
So, the rules:
1. $\forall A, B, C \in \mathcal{G} \implies A (B + C) = A B + A C$
2. $\forall A, B, C \in \mathcal{G} \implies (A + B) C = A C + B C$
3. $\forall A, B, C \in \mathcal{G} \implies A (B C) = (A B) C$
4. $\forall A \in \mathcal{G} \implies 1 A = A$
5. $\forall A \in \mathcal{G} \implies 0 A = 0$
6. $\forall A \in \mathcal{G}, \forall \alpha \in \mathcal{G}_0 \implies \alpha A = A \alpha$
7. $\forall x \in \mathcal{G}_1 \implies x x \in \mathcal{G}_0$
The first two rules tell us that multiplication is distributive over addition. The third tells us that it is associative. Rules 4 and 5 tell us how 1 and 0 work with multiplication. All of that should be more or less familiar, and it is useful. It's the 6th rule that starts to be interesting. Not because of what it says, but because of what it does not say.
The geometric product is guaranteed to be commutative **only if one of the factors is a scalar**. When no factor is a scalar, it may be commutative, it may be anticommutative, or it may be neither. Be *very* careful, because this is a great source of mistakes. Believe me.
And the last rule. This is the magic. This is where *everything* happens. The square of a vector is a scalar. This leads to the first definition.
$x^2 = ||x||^2$, where $||x||$ is the weight of $x$
We can start to play with ideas. If $x$ is a vector, what would happen if we added it to itself? $x + x = [product 4] = 1 x + 1 x = [product 2] = (1 + 1) x = 2 x$
What would the weight of $2x$ be? $||2x||^2 = (2x)(2x) = [product 6] = (2 \cdot 2) x x = 4 ||x||^2$
It kind of makes sense, right?
Now that we can multiply scalars and vectors we can revisit the properties of vectors so they can be better understood.
We say two vectors share the same attitude if $y = \alpha x$ for some scalar $\alpha$ ($x$ and $y$ are the vectors). If $\alpha$ is positive they also share the same orientation; if $\alpha$ is negative, they have opposite orientations. If $\nexists \alpha \in \mathcal{G}_0 : y = \alpha x$, then they have different attitudes.
Now, let's do some fun stuff (I'll stop telling you what rules I'm using, but I'll try to go slowly. If I think a step is not obvious, I'll put the rule again). Here we will use vectors $x$ and $y$ (I'll try to be consistent and use x, y and z as vectors from now on)
$(x + y)^2 = (x + y)(x + y) = x(x + y) + y(x + y) = xx + xy + yx + yy = x^2 + y^2 + xy + yx$
Ok, we have that $(x + y)^2 = x^2 + y^2 + xy + yx$. We know that $(x + y)^2$, $x^2$ and $y^2$ are scalars, so $xy + yx$ has to be a scalar too. Huh?
This thing is so important that we will give it a name and a proper definition:
The inner product between two vectors is an operation $\cdot : \mathcal{G}_1 \times \mathcal{G}_1 \to \mathcal{G}_0$ and it is defined $x \cdot y = \frac{1}{2}(xy + yx), \forall x, y \in \mathcal{G}_1$
And this brings to another definition: Two vectors are orthogonal if their inner product is zero[^ortho].
Orthogonality is an important concept. It means that the two vectors have nothing in common. Imagine you want to project one vector onto another. If they are orthogonal, the projected vector will be zero.
**********************************************************
* ^ ^ ^ *
* | / \ *
* | / | | \ *
* | / | | \ *
* *-----> *---+-> +---*------> *
* Orthogonal Non-orthogonal Also non-orthogonal *
* vectors vectors vectors *
**********************************************************
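The definition above is easy to check numerically. Here is a minimal sketch in a 2D geometric algebra; the 4-tuple representation (coefficients over the basis $\{1, e_1, e_2, e_{12}\}$) and the helper names are my own illustration, with the product table derived from $e_1 e_1 = e_2 e_2 = 1$ and $e_1 e_2 = -e_2 e_1 = e_{12}$:

```python
# A minimal numeric check of the inner product definition in a 2D
# geometric algebra. Multivectors are 4-tuples (scalar, e1, e2, e12).

def gp(a, b):
    """Geometric product, from e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1 = e12."""
    return (a[0]*b[0] + a[1]*b[1] + a[2]*b[2] - a[3]*b[3],
            a[0]*b[1] + a[1]*b[0] - a[2]*b[3] + a[3]*b[2],
            a[0]*b[2] + a[2]*b[0] + a[1]*b[3] - a[3]*b[1],
            a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1])

def inner(x, y):
    """x . y = (xy + yx) / 2 -- a pure scalar when x and y are vectors."""
    return tuple((p + q) / 2 for p, q in zip(gp(x, y), gp(y, x)))

e1 = (0, 1, 0, 0)
e2 = (0, 0, 1, 0)
print(inner(e1, e2))  # (0.0, 0.0, 0.0, 0.0) -- orthogonal vectors
print(inner(e1, e1))  # (1.0, 0.0, 0.0, 0.0) -- x . x = ||x||^2
```

The symmetrized product kills the $e_{12}$ parts of $xy$ and $yx$, leaving only the scalar, exactly as the derivation above promised.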
Imagine we have two vectors: $x$ and $a$. We can decompose $a$ with the help of another vector $y$ that is orthogonal to $x$ ($x \cdot y = 0$) and two scalars $\alpha$ and $\beta$: $a = \alpha x + \beta y$. Having done this decomposition, we can perform the following operations:
************************************
* *
* beta y a *
* ^------ ^ *
* | / *
* | / | *
* / | *
* ^ / | *
* | / | *
* y | / | *
* |/ | *
* *---> +-> *
* x alpha x *
* *
************************************
$x a = x (\alpha x + \beta y) = \alpha x^2 + \beta xy$
$a x = (\alpha x + \beta y) x = \alpha x^2 + \beta yx$
$x \cdot a = \frac{1}{2} (x a + a x) = \frac{1}{2} (\alpha x^2 + \beta xy + \alpha x^2 + \beta yx) = \alpha x^2 + \frac{1}{2} (\beta xy + \beta yx) = \alpha x^2 + \beta \frac{1}{2} (xy + yx) = \alpha x^2 + \beta (x \cdot y) = \alpha x^2 + \beta 0 = \alpha x^2$
In a sense, the inner product gives us a measure of how "similar" two vectors are.
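The derivation above can be checked with plain coordinates. A small sketch (the `dot` helper and the chosen numbers are my own illustration, assuming the usual Euclidean inner product):

```python
# Numeric check: pick an orthogonal pair x, y, build a = alpha*x + beta*y,
# and confirm that x . a equals alpha * ||x||^2 (the beta part vanishes).

def dot(u, v):                       # Euclidean inner product of 2D vectors
    return u[0]*v[0] + u[1]*v[1]

x, y = (2.0, 0.0), (0.0, 3.0)        # orthogonal: dot(x, y) == 0
alpha, beta = 0.5, 4.0
a = (alpha*x[0] + beta*y[0], alpha*x[1] + beta*y[1])
print(dot(x, a))                     # 2.0
print(alpha * dot(x, x))             # 2.0 -- matches alpha * x^2
```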
But there is a product far more interesting than the inner product (and I'll come back to that later): the outer product.
# The Outer Product
The outer product between two vectors is an operation $\wedge : \mathcal{G}_1 \times \mathcal{G}_1 \to \mathcal{G}_2$ and it is defined $x \wedge y = \frac{1}{2}(xy - yx), \forall x, y \in \mathcal{G}_1$
This is where the familiar unfamiliar starts. If you know vector algebra, you will start to notice that some formulas are familiar to you, but the geometric interpretation we give them here will be slightly different. You probably know those formulas by rote; here I'll try to show you why they are the way they are.
Let's ignore for a moment the $\mathcal{G}_2$ thing. Don't worry, we'll get there. The outer product, or "wedge" product[^wedgeCat] has some interesting properties we can prove from its definition:
* Antisymmetric: $a \wedge b = \frac{1}{2}(ab - ba) = \frac{1}{2}(-ba + ab) = \frac{1}{2}((-ba) - (-ab)) = -\frac{1}{2}(ba - ab) = -b \wedge a$
* Scaling: $a \wedge (\alpha b) = (\alpha a) \wedge b = \alpha (a \wedge b)$
* Distributive: $a \wedge (b + c) = a \wedge b + a \wedge c$
Those nice properties have some nice consequences:
* The wedge product of a vector with itself (or a multiple of itself) is zero: $a \wedge a = -(a \wedge a) \implies a \wedge a = 0$
* If we add a multiple of one vector to the other factor, the result is unmodified: $a \wedge (b + \alpha a) = a \wedge b + a \wedge (\alpha a) = a \wedge b + \alpha (a \wedge a) = a \wedge b + \alpha 0 = a \wedge b$
* If we scale both vectors by the same factor, the result is scaled by said factor squared: $(\alpha a) \wedge (\alpha b) = \alpha^2 (a \wedge b)$
* The previous point means that once you have wedged two vectors you cannot get back the original vectors. Even if you had one of them, there would be an infinite number of vectors that would give you the same result.
All these results hint that this new element we got as a result of the outer product can be interpreted as a kind of *oriented area*.
***************************************************************
* *
* b + alpha a *
* ^ -----+ ^ -----+ ^------ ^ ------+ *
* | | | | | / / *
* | a^b | | b^a | | / a^b / *
* | ^ | | | | | / ^ / *
* b | | | b | | | b| / | / *
* | --' | | <--' | | / ---' / *
* | | | | | / / *
* | | |/ / *
* *-------> *-------> *-------> *
* a a a *
* *
***************************************************************
So. If we interpret that object as an oriented area, we may as well give it a name. The name it has is "bivector", because it's like a vector, but with two dimensions. How much "like a vector" is it, though?
* It has a **weight**. It is proportional to the weights of the vectors we used to build it. It is roughly equivalent to an area.
* It has an **attitude**: There is only one attitude in 2D, but if we were to make bivectors with vectors that live in three dimensions, we would be able to span those with a lot more freedom.
* It has an **orientation**. As we have seen, $a \wedge b = -b \wedge a$, so $a \wedge b$ and $b \wedge a$ share attitude and weight, but have opposite orientations.
Remember $\mathcal{G}_2$? $\mathcal{G}_2$ means "the collection of all bivectors in my geometric algebra": essentially everything you can build by wedging two (and only two) vectors.
Imagine we have two base vectors: ${e_1, e_2}$. Note that $e_1 \wedge e_2 = - e_2 \wedge e_1$ and we will define $e_{12} = e_1 \wedge e_2$. With those we can build any vector in the same plane as a scaled sum of them $\alpha e_1 + \beta e_2$. Then we have:
* $a = a_1 e_1 + a_2 e_2$
* $b = b_1 e_1 + b_2 e_2$
* $a \wedge b = (a_1 e_1 + a_2 e_2) \wedge (b_1 e_1 + b_2 e_2) =$ $= a_1 e_1 \wedge b_1 e_1 + a_1 e_1 \wedge b_2 e_2 + a_2 e_2 \wedge b_1 e_1 + a_2 e_2 \wedge b_2 e_2 =$ $= a_1 b_1 (e_1 \wedge e_1) + a_2 b_2 (e_2 \wedge e_2) + a_1 b_2 (e_1 \wedge e_2) + a_2 b_1 (e_2 \wedge e_1) =$ $= a_1 b_1 0 + a_2 b_2 0 + a_1 b_2 e_{12} - a_2 b_1 e_{12} =$ $= (a_1 b_2 - a_2 b_1) e_{12}$
This result, $(a_1 b_2 - a_2 b_1) e_{12}$, gives you exactly the area of one parallelogram (the one defined by $a$ and $b$) with respect to another, known one (the one defined by $e_1$ and $e_2$). This means that any 2D bivector is a multiple of any other 2D bivector (which makes sense, actually).
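The coordinate formula just derived also makes the earlier wedge properties easy to verify numerically. A small sketch (the `wedge2` helper and the sample vectors are my own illustration):

```python
# The e12 coefficient (a1*b2 - a2*b1) of a ^ b for 2D vectors, used to
# confirm two consequences: a ^ a = 0, and a ^ (b + alpha*a) = a ^ b.

def wedge2(a, b):
    """Coefficient of e12 in a ^ b."""
    return a[0]*b[1] - a[1]*b[0]

a, b, alpha = (3.0, 1.0), (1.0, 2.0), 5.0
print(wedge2(a, a))                          # 0.0 -- a ^ a vanishes
shifted = (b[0] + alpha*a[0], b[1] + alpha*a[1])
print(wedge2(a, b), wedge2(a, shifted))      # 5.0 5.0 -- shearing b along a
                                             # leaves the area unchanged
```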
If we were to go into three dimensions...
* $e_1, e_2, e_3$
* $e_1 \wedge e_2 = -e_2 \wedge e_1 = e_{12}$
* $e_2 \wedge e_3 = -e_3 \wedge e_2 = e_{23}$
* $e_3 \wedge e_1 = -e_1 \wedge e_3 = e_{31}$
* $a = a_1 e_1 + a_2 e_2 + a_3 e_3$
* $b = b_1 e_1 + b_2 e_2 + b_3 e_3$
* $a \wedge b = (a_1 e_1 + a_2 e_2 + a_3 e_3) \wedge (b_1 e_1 + b_2 e_2 + b_3 e_3) =$ $= (a_2 b_3 - a_3 b_2)e_{23} + (a_3 b_1 - a_1 b_3)e_{31} + (a_1 b_2 - a_2 b_1)e_{12}$
What this means is that any 3D bivector can be expressed as the weighted sum of three basis bivectors.
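The three basis components just derived can be packed into a small function. A sketch (the name `wedge3` is my own); note the numbers are the same ones the traditional cross product produces, but here they are the $e_{23}$, $e_{31}$, $e_{12}$ components of a bivector, not of a vector:

```python
# The three bivector coordinates of a ^ b for 3D vectors a, b.

def wedge3(a, b):
    return (a[1]*b[2] - a[2]*b[1],   # e23 component
            a[2]*b[0] - a[0]*b[2],   # e31 component
            a[0]*b[1] - a[1]*b[0])   # e12 component

print(wedge3((1, 0, 0), (0, 1, 0)))  # (0, 0, 1): e1 ^ e2 = e12
```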
One thing I should mention around now: given any bivector in 2 or 3 dimensions, you can find 2 vectors that, wedged, will give you that bivector, but that does not hold in more than 3 dimensions. We will go beyond three dimensions later, so you shouldn't worry about that yet. This concept (being decomposable into wedged vectors) is what brings us to the following definition:
* An n-blade is the outer product of n vectors.
* A 0-blade is a scalar, a 1-blade is a vector, a 2-blade is a bivector,...
And yes, we have to go further. Into trivectors. But our definition of the outer product does not let us go there yet.
# Grades. Involutions. Reversions. Lots of definitions.
We start by noticing an interesting property about the inner product and the outer product of two vectors.
* $x \cdot y = \frac{1}{2}(xy + yx)$
* $x \wedge y = \frac{1}{2}(xy - yx)$
* $x \cdot y + x \wedge y = \frac{1}{2}(xy + yx) + \frac{1}{2}(xy - yx) = \frac{1}{2} xy + \frac{1}{2} yx + \frac{1}{2}xy - \frac{1}{2} yx = x y$
* $x \cdot y = 0 \iff x y = x \wedge y = -y \wedge x = -y x$
* $x \wedge y = 0 \iff x y = x \cdot y = y \cdot x = y x$
* So, if two vectors are orthogonal, their geometric product anticommutes. If they have the same attitude, it commutes.
* Also, the geometric product of two orthogonal vectors is equivalent to their outer product. So the geometric product of two orthogonal vectors **is** a bivector.
We take three vectors: $a$, $b = \alpha a + b_1$ and $c = \beta a + \gamma b_1 + c_1$, where $a \cdot b_1 = a \cdot c_1 = b_1 \cdot c_1 = 0$ (this means $a$, $b_1$ and $c_1$ anticommute under the geometric product).
We multiply those vectors.
$a b c = a (\alpha a + b_1) (\beta a + \gamma b_1 + c_1) = (\alpha a^2 + a b_1)(\beta a + \gamma b_1 + c_1) =$
$= \alpha \beta a^2 a + \alpha \gamma a^2 b_1 + \alpha a^2 c_1 - \beta a^2 b_1 + \gamma b_1^2 a + a b_1 c_1$ (using $a b_1 a = -a^2 b_1$ and $a b_1^2 = b_1^2 a$)
$= ((\alpha \beta a^2 + \gamma b_1^2) a + ((\alpha \gamma - \beta) a^2) b_1 + (\alpha a^2) c_1) + a b_1 c_1$
As you can see, the geometric product of three vectors gives a vector (if you are unconvinced that the first part of the equation is a vector, remember that a squared vector is a scalar) plus the product of three orthogonal vectors. We already know that the geometric product of two orthogonal vectors is a bivector, so here we have a bivector multiplied by an orthogonal vector. We will call this construct a trivector.
Now we should talk about grades. The grade of something is the amount of orthogonal vectors you have to multiply together to make that something[^grades]. A vector has grade 1. A trivector grade 3, and so on.
In a sense, we can divide our $\mathcal{G}$ into the different grades $\mathcal{G}_0$, $\mathcal{G}_1$, $\mathcal{G}_2$, $\mathcal{G}_3$..., depending on how many orthogonal vectors it takes to get a given element. With that we can define the grade operator $\langle \rangle_r : \mathcal{G} \to \mathcal{G}_r$, an operation that takes a multivector[^multivectors] and removes anything that is not of grade r.
As an example, imagine we have the multivector $5 + 4x + 5xy$ (with $x$ and $y$ orthogonal vectors, so $xy$ is a bivector). Then the operation gives: $\langle 5 + 4x + 5xy \rangle_1 = 4x$.
Oh, by the way, $\langle \rangle_r$ is also defined for negative r; it just results in 0.
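The grade operator has a very direct computational reading. A sketch of one possible representation (the dict-of-blades layout and the `grade` helper are my own illustration, not a standard library):

```python
# A sketch of the grade operator on a dict-based multivector: keys are basis
# blades written as tuples of basis-vector indices (() is the scalar part,
# (1,) is e1, (1, 2) is e12), and grade selection just filters by key length.

def grade(mv, r):
    return {blade: c for blade, c in mv.items() if len(blade) == r}

# 5 + 4x + 5xy with x = e1, y = e2 (orthogonal, so xy is the bivector e12):
mv = {(): 5, (1,): 4, (1, 2): 5}
print(grade(mv, 1))    # {(1,): 4} -- only the vector part survives
print(grade(mv, -1))   # {} -- negative grades give zero
```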
So: a multivector is just about anything. An r-vector is a multivector with only grade-r parts. An r-versor is the geometric product of r vectors. An r-blade is the outer product of r vectors (or the geometric product of r orthogonal vectors).
* An r-blade is an r-versor, an r-vector and a multivector.
* An r-versor is a multivector, but not always an r-blade nor an r-vector.
* I like to denote scalars with Greek letters ($\alpha, \beta, \gamma, ...$).
* Vectors with lowercase Latin letters ($a, b, c, x, y, z, ...$).
* Multivectors with uppercase Latin letters ($A, B, C, ...$).
* R-vectors with an uppercase Latin letter with the grade as a subindex ($A_r, A_0, A_4, B_r, C_r, ...$).
* Blades with boldface ($\boldsymbol{A_r}, \boldsymbol{A}, \boldsymbol{A_4}, \boldsymbol{B}, \boldsymbol{C}, ...$).
Ok, more definitions.
The grade involution $\widehat{} : \mathcal{G} \to \mathcal{G}$ is defined as $\widehat{A} = \sum_{r}{ (-1)^r \langle A \rangle_r }$. It is an operation that negates all odd-grade parts of a given multivector.
The reversion $\widetilde{} : \mathcal{G} \to \mathcal{G}$ is defined as $\widetilde{A} = \sum_{r}{ (-1)^{\frac{r(r-1)}{2}} \langle A \rangle_r }$. This is a bit more cumbersome, but the pattern here is $++ -- ++ -- ++ ...$. This operation reverses an outer product, so if $A = a_1 \wedge a_2 \wedge ... \wedge a_n$ then $\widetilde{A} = a_n \wedge ... \wedge a_2 \wedge a_1$. It has its uses.[^TheCliffordConjugation]
Some useful properties:
Property | notes
--------------------------------------------------------------------------------|------------------
$\widehat{\alpha} = \alpha$ | $\alpha \in \mathcal{G}_0$
$\widehat{a} = -a$ | $a \in \mathcal{G}_1$
$\widehat{A B} = \widehat{A} \widehat{B}$ |
$\widehat{A + B} = \widehat{A} + \widehat{B}$ |
$\widehat{A_r} = (-1)^r A_r$ | $A_r$ is a r-vector
$\widehat{A} = \langle A \rangle_{+} - \langle A \rangle_{-}$ | $\langle \rangle_{+}$ are the even grades, $\langle \rangle_{-}$ the odd
$\widehat{A} = \boldsymbol{I} \widehat{A}^n \boldsymbol{I}^{-1}$ | $\boldsymbol{I}$ is a multivector of grade n that contains $A$ (more on this later; the concept here is "the pseudoscalar"). $\widehat{A}^n$ means applying the involution n times.
$\widehat{\widehat{A}} = A$ |
$\widehat{(A^{-1})} = (\widehat{A})^{-1}$ | We haven't talked about inverses yet. They exist.
$\widehat{A \rfloor B} = \widehat{A} \rfloor \widehat{B}$ | You don't know what $\rfloor$ is yet
$\widehat{A \lfloor B} = \widehat{A} \lfloor \widehat{B}$ | Same
$\widehat{A \wedge B} = \widehat{A} \wedge \widehat{B}$ |
$A \boldsymbol{I} = \boldsymbol{I} \widehat{A}^{n-1}$ | See "the pseudoscalar"
Property | notes
--------------------------------------------------------------------------------|------------------
$\widetilde{\alpha} = \alpha$ | $\alpha \in \mathcal{G}_0$
$\widetilde{a} = a$ | $a \in \mathcal{G}_1$
$\widetilde{A B} = \widetilde{B} \widetilde{A}$ |
$\widetilde{A + B} = \widetilde{A} + \widetilde{B}$ |
$\widetilde{A_r} = (-1)^{\frac{r(r-1)}{2}} A_r$ | $A_r$ is a r-vector
$\widetilde{\widetilde{A}} = A$ |
$\widetilde{(A^{-1})} = (\widetilde{A})^{-1}$ |
$\widetilde{A \rfloor B} = \widetilde{B} \lfloor \widetilde{A}$ | You don't know what $\rfloor$ is yet
$\widetilde{A \lfloor B} = \widetilde{B} \rfloor \widetilde{A}$ | Same
$\widetilde{A \wedge B} = \widetilde{B} \wedge \widetilde{A}$ |
With all of this behind us, I can give you two better definitions of the outer product:
* $a \wedge B_r = \frac{1}{2}(a B_r + \widehat{B_r} a) = \langle a B_r \rangle_{r+1}, \forall a \in \mathcal{G}_1, \forall B_r \in \mathcal{G}_r$
* $B_r \wedge a = \frac{1}{2}(B_r a + a \widehat{B_r}) = \langle B_r a \rangle_{r+1}, \forall a \in \mathcal{G}_1, \forall B_r \in \mathcal{G}_r$
* $a \wedge B = \frac{1}{2}(a B + \widehat{B} a), \forall a \in \mathcal{G}_1, \forall B \in \mathcal{G}$
* $B \wedge a = \frac{1}{2}(B a + a \widehat{B}), \forall a \in \mathcal{G}_1, \forall B \in \mathcal{G}$
* $\alpha \wedge B = B \wedge \alpha = \alpha B = B \alpha, \forall \alpha \in \mathcal{G}_0, \forall B \in \mathcal{G}$
All of that so I can tell you:
* The outer product is associative: $A \wedge (B \wedge C) = (A \wedge B) \wedge C = A \wedge B \wedge C$
* We can make outer products of general multivectors by using the fact that it is associative and distributive over the addition.
# Let's talk about trivectors
Imagine we have three vectors, where the third is just a linear combination of the other two: $a$, $b$, $c = \alpha a + \beta b$. We'd say those vectors lie in the same plane. If we were to take the outer product of those three vectors, we would get:
$a \wedge b \wedge c = a \wedge (b \wedge (\alpha a + \beta b)) = a \wedge (b \wedge \alpha a + b \wedge \beta b) = a \wedge (\alpha b \wedge a + \beta b \wedge b) =$ $= a \wedge (\alpha b \wedge a + 0) = \alpha a \wedge b \wedge a = - \alpha b \wedge a \wedge a = - \alpha b \wedge 0 = 0$
This gives us a way to test whether a vector lies within a given plane (represented by a bivector). This is a property of the outer product that works in any dimension: wedge a vector with a bivector and you'll get zero if the vector lies in that plane, just as wedging two vectors with the same attitude gives zero.
From there it is easy to see that trivectors are always zero in two dimensions: all vectors lie in the same plane.
But what about three dimensions? Let's repeat the process we followed with the bivectors:
* $e_1, e_2, e_3$
* $e_1 \wedge e_2 \wedge e_3 =$ $= -e_1 \wedge e_3 \wedge e_2= -e_2 \wedge e_1 \wedge e_3 = e_2 \wedge e_3 \wedge e_1 = e_3 \wedge e_1 \wedge e_2 = -e_3 \wedge e_2 \wedge e_1 =$ $= e_{123}$
* $a = a_1 e_1 + a_2 e_2 + a_3 e_3$
* $b = b_1 e_1 + b_2 e_2 + b_3 e_3$
* $c = c_1 e_1 + c_2 e_2 + c_3 e_3$
* $a \wedge b \wedge c = (a_1 e_1 + a_2 e_2 + a_3 e_3) \wedge (b_1 e_1 + b_2 e_2 + b_3 e_3) \wedge (c_1 e_1 + c_2 e_2 + c_3 e_3) =$ $= ((a_2 b_3 - a_3 b_2)e_{23} + (a_3 b_1 - a_1 b_3)e_{31} + (a_1 b_2 - a_2 b_1)e_{12}) \wedge (c_1 e_1 + c_2 e_2 + c_3 e_3)$ $= (a_1 b_2 c_3 - a_1 b_3 c_2 + a_2 b_3 c_1 - a_2 b_1 c_3 + a_3 b_1 c_2 - a_3 b_2 c_1) e_{123}$
This, again, gives us the **volume** spanned by those three vectors ($a$, $b$ and $c$) with respect to a known base volume unit ($e_{123}$). There is only one volume basis in three dimensions, because you cannot meaningfully orient it. This single element that spans all available space is sometimes known as the *pseudoscalar* (because it behaves like a scalar sometimes). In two dimensions the pseudoscalar is a bivector, in three dimensions it is a trivector, in four dimensions it would be a 4-vector, and so on.
And again, all 4-vectors in three dimensions are zero. If you were to wedge 4 three-dimensional vectors together, they would live inside the same volume, so everything would cancel out to zero.
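The $e_{123}$ coefficient just derived is the familiar $3 \times 3$ determinant (the scalar triple product), read here as "volume of $(a, b, c)$ in units of $e_{123}$". A sketch (the name `wedge123` is my own):

```python
# The e123 coefficient of a ^ b ^ c for 3D vectors a, b, c.

def wedge123(a, b, c):
    return (a[0]*b[1]*c[2] - a[0]*b[2]*c[1]
          + a[1]*b[2]*c[0] - a[1]*b[0]*c[2]
          + a[2]*b[0]*c[1] - a[2]*b[1]*c[0])

print(wedge123((2, 0, 0), (0, 3, 0), (0, 0, 4)))  # 24: a 2x3x4 box
print(wedge123((1, 0, 0), (0, 1, 0), (1, 1, 0)))  # 0: coplanar vectors
```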
********************************************************************************************************
* *
* *
* *
* *
* ^ *
* | *
* | *
* | *
* |c *
* | ^ *
* | / *
* | / *
* | /b *
* | / *
* | / *
* | / *
* |/ *
* *--------------------------> *
* a *
* *
* *
* *
* a *
* +--------------------------^ ^--------------------------> *
* /| /| / /| *
* / | / | / | b^a / | *
* / | / | / | .--. / | *
* / | / | b/ | | | / | *
* / | / | / | | v / | *
* / | / | / | / | *
* / | / | c / | / |-c *
* +--------------------------+ | *--------------------------+ | *
* | | | | | | | | *
* | | | | | | | | *
* | | | | | | | | *
* | | | | | | | | *
* | +------------------|-------^ | +------------------|-------v *
* | / | / | / | / *
* | / a^b | / | / | / *
* | / <--. | / | / | / *
* | / | | /b | / | / *
* | / --' | / | / | / *
* | / | / | / | / *
* |/ |/ |/ |/ *
* *--------------------------> +--------------------------+ *
* a *
* *
* a ^ b ^ c -b ^ a ^ c *
* *
* *
* *
* *
* +--------------------------^ +--------------------------^ *
* /| / /| /| *
* / | / | / | / | *
* / | / | / | / | *
* / | b / | / | / | *
* / | / | / | /-b | *
* / | / | / | a^c / | *
* / | a / | / | <--. / |c *
* ^--------------------------> | +-------------- | ---v | *
* | | | | | | --' | | *
* | | | | | | | | *
* | | c^a | | | | | | *
* | | .--. | | | | a | | *
* | +-- | | ------|-------+ | *------------------|-------> *
* c| / | v | / | / | / *
* | / | / | / | / *
* | / | / | / | / *
* | / | / | / | / *
* | / | / | / | / *
* | / | / | / | / *
* |/ |/ |/ |/ *
* *--------------------------+ +--------------------------+ *
* *
* *
* c ^ a ^ b - a ^ c ^ b *
* *
* *
* *
* a -a *
* ^--------------------------> <--------------------------^ *
* /| /| /| / *
* / | / | / | / | *
* / | / | / | / | *
* / | / | / | b/ | *
* / | / | / | / | *
* / |c / | / | / | *
* / | / | / | / | *
* +--------------------------+ | +--------------------------^ | *
* | | | | | | | | *
* | ^ | | | | | | .--> | *
* | b^c| | | | | | | |c^b | *
* | | | | | | | | | | *
* | --' ^------------------|-------+ | +------------------|-------+ *
* | / | / | / | / *
* | / | / | / |c / *
* | / | / | / | / *
* | /b | / | / | / *
* | / | / | / | / *
* | / | / | / | / *
* |/ |/ |/ |/ *
* *--------------------------+ +--------------------------* *
* *
* *
* b ^ c ^ a - c ^ b ^ a *
* *
* *
* *
********************************************************************************************************
Six different ways to build the same trivector. Note that from inside of it the first bivector always looks the same (counter-clockwise). Or, from the outside, it always looks clockwise. This convention is known as handedness. This is a good way to test whether a vector goes into a bivector or away from it: you choose a convention, wedge the vector and the bivector, and compare the sign of the resulting trivector against your "standard" one[^TrivectorTrick]. Even 0 tells you something: the vector is contained in the bivector.
All of this means that the outer product really is an operation $\wedge : \mathcal{G}_n \times \mathcal{G}_m \to \mathcal{G}_{n + m}$. If you were to go to a grade greater than your number of dimensions, it resolves to zero. That also means that the outer product isn't always anticommutative; a vector wedged with a bivector is, in fact, commutative: $ a \wedge (b \wedge c)= $ $= a \wedge b \wedge c =$ $= -b \wedge a \wedge c =$ $= b \wedge c \wedge a =$ $= (b \wedge c) \wedge a$
Of course, you can extend it into general multivectors ($\wedge : \mathcal{G} \times \mathcal{G} \to \mathcal{G}$). That does not make much sense geometrically, but nobody can stop you.
All in all we can give the rules for the full outer product, or wedge product:
1. $\forall a \in \mathcal{G}_1, \forall B \in \mathcal{G} \implies a \wedge B = \frac{1}{2}(a B + \widehat{B} a)$
2. $\forall a \in \mathcal{G}_1, \forall B \in \mathcal{G} \implies a \wedge B = \widehat{B} \wedge a$
3. $\forall A, B, C \in \mathcal{G} \implies A \wedge B \wedge C = (A \wedge B) \wedge C = A \wedge (B \wedge C)$
4. $\forall a, c \in \mathcal{G}_1, \forall B \in \mathcal{G} \implies a \wedge B \wedge c = -c \wedge B \wedge a$
# What do blades represent?
Subspaces!
What is a subspace, though? It's a defined fragment of another space. Vectors represent 1D subspaces. Imagine we have a vector $x$. This vector represents a line $a$: the set of all vectors $v$ such that $v \in a \iff \exists \alpha \in \Real : v = \alpha x$.
If vectors represent 1D subspaces, then bivectors represent 2D subspaces: planes within a space. Remember that in 2D all vectors pertain to the same plane, while in 3 or more dimensions a vector may or may not be in a given plane. So how do we check that? Imagine we have two vectors, $x$ and $y$, that lie on the plane $B$ and do not pertain to the same line: $\nexists \alpha \in \Real : y = \alpha x$. Then the bivector $x \wedge y$ represents the set of all vectors $v \in B \iff \exists \alpha, \beta \in \Real : v = \alpha x + \beta y \iff v \wedge x \wedge y = 0$.
And this goes on and on, to any number of dimensions you could ever need.
When you wedge blades of grade 2 or more, a new subtlety appears: the result is 0 if (and only if) you can factor at least one common vector out of them. This only shows up in 4 dimensions and up; in 3 dimensions any two bivectors share at least one common vector, so wedging them is always zero.
So, a blade represents a subspace.
But there is a problem.
This interpretation of blades completely ignores their weight and orientation. Any scalar multiple of a blade represents the same subspace. You cannot represent "points" with this system, as all vectors that are multiples of each other are equivalent.
The truth is that you can interpret a vector as a point, and that can get you really far. But there are better ways, ways that let you do some cool tricks with GA: you can represent a point using a subspace.
I'll talk about that later, as we have a lot of work to do yet.
# Back to the geometric product. Projection and Rejection to a vector.
A tiny detail I've teased but have not talked about yet is the idea of an inverse.
An inverse is an element that, multiplied by another element, gives you 1.
A squared vector, as you should know by now, gives you a scalar. Scalars are great, because they have easy inverses: $\frac{1}{s}$. This leads to the following formula: $v^{-1} = \frac{v}{v^2}$ for $v \in \mathcal{G}_1$ and $v^2 \neq 0$. This is trivially checked: $v v^{-1} = v \frac{v}{v^2} = v v \frac{1}{v^2} = v^2 \frac{1}{v^2} = 1$. It even works if you put it on the other side! $v^{-1} v = 1$.
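The inverse formula is easy to check with coordinates. A sketch (the `dot` helper and sample vector are my own illustration); since $v$ and $v/v^2$ are parallel, their geometric product has no wedge part and reduces to the dot product, which must come out as 1:

```python
# Check v^{-1} = v / v^2: the product of v with its inverse is the scalar 1.

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

v = (3.0, 4.0)
v2 = dot(v, v)                      # v squared is the scalar 25.0
v_inv = (v[0] / v2, v[1] / v2)
print(dot(v, v_inv))                # 1.0 (up to floating point)
```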
This idea works beyond vectors. All non-null blades have inverses[^NoInversesYes]. Non-null as in "if you square them you don't get zero". That's right: squared blades *also* give you scalars.
Which brings me to a **very** interesting point.
If we start with two orthogonal unit vectors $x$ and $y$ ($x^2 = 1, y^2 = 1, x \cdot y = 0$) and we make a bivector by wedging them, $I = x \wedge y$, what is the value of $I^2$? Well, $I^2 = (x \wedge y) (x \wedge y) =$[^DirtyTrick] $= (x y) (x y) =$ $= x y x y =$[^DirtyTrick] $=-x y y x =$ $= -x y^2 x =$ $= -y^2 x x =$ $= -y^2 x^2 =$ $= -1$.
So the square of the unit bivector is $-1$. No, I didn't name it $I$ for no reason.[^ComplexNumbers]
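We can verify this with a tiny 2D geometric algebra, written out by hand (a sketch of my own, with the multiplication table fully expanded; a multivector is just the 4 coefficients on the basis $1, x, y, xy$):

```python
# A 2D multivector as (scalar, x, y, xy) coefficients, with x and y
# orthonormal. gp is the geometric product with the table expanded:
# xx = yy = 1, xy = -yx, (xy)(xy) = -1, x(xy) = y, (xy)x = -y, ...
def gp(a, b):
    return (a[0]*b[0] + a[1]*b[1] + a[2]*b[2] - a[3]*b[3],  # scalar
            a[0]*b[1] + a[1]*b[0] - a[2]*b[3] + a[3]*b[2],  # x
            a[0]*b[2] + a[2]*b[0] + a[1]*b[3] - a[3]*b[1],  # y
            a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1])  # xy

I = (0.0, 0.0, 0.0, 1.0)   # the unit bivector x ^ y
I_squared = gp(I, I)       # (-1, 0, 0, 0): a pure scalar, -1
```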
That's nice, and it lets you do some fun stuff.
Let's start with two vectors, $x, a$. Now look at this:
$x = x a a^{-1} = (x \cdot a) a^{-1} + (x \wedge a) a^{-1}$ [^TheGeomProdIdentity]
The result equals $x$, a vector, but decomposed into two parts: $(x \cdot a) a^{-1}$ and $(x \wedge a) a^{-1}$. I'll call the first $P_a(x)$ and the second $R_a(x)$. I'll tell you why in a moment, first just see:
$P_a(x) \wedge a = ((x \cdot a) a^{-1}) \wedge a = ((x \cdot a) \frac{a}{a^2}) \wedge a = (((x \cdot a) \frac{1}{a^2}) a) \wedge a =$[^RememberScalars] $= ((x \cdot a) \frac{1}{a^2}) (a \wedge a) = 0$
$R_a(x) \cdot a = ((x \wedge a) a^{-1}) \cdot a = ((x \wedge a) \frac{a}{a^2}) \cdot a =$[^DotDefinition] $= \frac{1}{2}(((x \wedge a) \frac{a}{a^2}) a + a ((x \wedge a) \frac{a}{a^2})) =$[^WedgeDefinition] $= \frac{1}{2}((\frac{1}{2}(x a - a x) \frac{a}{a^2}) a + a \frac{1}{2}((x a - a x) \frac{a}{a^2})) =$ $= \frac{1}{4 a^2}(((x a - a x) a) a + a ((x a - a x) a)) =$ $= \frac{1}{4 a^2}((x a a - a x a) a + a (x a a - a x a)) =$ $= \frac{1}{4 a^2}(x a a a - a x a a + a x a a - a a x a) =$ $= \frac{1}{4 a^2}(a^2 x a - a^2 a x + a^2 a x - a^2 x a) =$ $= \frac{1}{4 a^2}((a^2 x a - a^2 x a) + (a^2 a x - a^2 a x)) =$ $= 0$
That's a good wall of formulas.
Anyway, $P_a(x) \wedge a = 0$, which means $P_a(x)$ is a vector in the same direction as $a$. And $R_a(x) \cdot a = 0$, which means it is a vector orthogonal to $a$. And those vectors, added, give $x = P_a(x) + R_a(x)$. We call those operations the Projection and the Rejection of $x$ onto $a$.
********************************************************************************************************
* *
* projection x *
* ^ --- --- ^ *
* | / *
* | / *
* | / | *
* | / | *
* | / *
* | / *
* / | *
* / | *
* a / *
* ^ / *
* | / | *
* | / | *
* | / *
* |/ *
* *--------------> *
* rejection *
* *
********************************************************************************************************
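The two formulas above collapse to something very familiar when written out in coordinates (a quick Python sketch with tuples as vectors; the helper names are mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(x, a):
    # P_a(x) = (x . a) a^{-1} collapses to the familiar (x.a / a.a) a
    s = dot(x, a) / dot(a, a)
    return tuple(s * c for c in a)

def reject(x, a):
    # R_a(x) = x - P_a(x), since projection + rejection = x
    p = project(x, a)
    return tuple(xc - pc for xc, pc in zip(x, p))

a = (1.0, 2.0, 2.0)
x = (3.0, 0.0, 1.0)
p, r = project(x, a), reject(x, a)

recombined = tuple(pc + rc for pc, rc in zip(p, r))  # adds back to x
r_dot_a = dot(r, a)                                  # 0: orthogonal to a
```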
# Reflections
Having a quite easy formula to project $P_a(x) = (x \cdot a) a^{-1}$ and reject $R_a(x) = (x \wedge a) a^{-1}$ gives us freedom to try something a bit more complicated: a reflection.
**********************************************************************************************************
* *
* projection reflection x *
* ^ ^ --- --- -+- --- --- ^ *
* | \ | / *
* | \ | / *
* | \ / *
* | \ / *
* | \ / *
* | \ / *
* | \ / *
* | \ / *
* | \ a / *
* | \ ^ / *
* | \ | / *
* | \ | / *
* | \ | / *
* | \|/ *
* | * *
* *
* *
* <--------------- *
* -rejection *
* *
* *
* *
**********************************************************************************************************
If you look carefully, you will notice that a reflection is taking the projected vector and negating the rejected vector. So: $Reflect_a(x) = P_a(x) - R_a(x)$. Can we go further?
$P_a(x) - R_a(x) = (x \cdot a) a^{-1} - (x \wedge a) a^{-1} =$[^DotConmutative] $= (a \cdot x) a^{-1} - (x \wedge a) a^{-1} =$[^WedgeAnticonmutative] $= (a \cdot x) a^{-1} + (a \wedge x) a^{-1} =$ $= (a \cdot x + a \wedge x) a^{-1} =$[^TheGeomProdIdentity] $= a x a^{-1}$
Turns out the formula for reflections is *easier* than the one for projections and rejections. In fact, this formula is so important in Geometric Algebra that it is known as **the sandwich product**.
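For plain vectors, the sandwich $a x a^{-1}$ expands to $2 \frac{x \cdot a}{a^2} a - x$, which is easy to try out (a Python sketch; the function name is mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reflect(x, a):
    # a x a^{-1}: for vectors this expands to 2 (x.a / a.a) a - x,
    # i.e. projection minus rejection.
    s = 2.0 * dot(x, a) / dot(a, a)
    return tuple(s * ac - xc for ac, xc in zip(a, x))

a = (0.0, 1.0, 0.0)                       # reflect through the y axis
mirrored = reflect((3.0, 4.0, 5.0), a)    # (-3.0, 4.0, -5.0)
```

The component along $a$ survives and everything orthogonal to $a$ flips sign, exactly the "projection minus rejection" picture above.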
# The Contraction, The Scalar Product, Projections and Angles.
There are some products still left. The three I'll talk about in this section come from the fact that the dot product we've been using so far only works with vectors, and we'd like to extend that a bit.
First I'll tell you what the contraction does *geometrically* and then I'll show you how it is defined.
The thing about contraction is that it's some kind of reverse outer product. Some kind. Because a projection is involved.
I'll start with the left contraction. $\rfloor : \mathcal{G}_n \times \mathcal{G}_m \to \mathcal{G}_{m - n}$. We say $A \rfloor B$ is the contraction of $A$ onto $B$. It kind of subtracts $A$ from $B$. But in order to remove something from somewhere else, they first have to have something in common; that's where the projection comes into play.
$P_B(A) \wedge (A \rfloor B) = A^2 B$
Please note that the former formula only makes sense when $A$ and $B$ are blades.[^BladeProjection]
It's easier to understand with vectors and bivectors:
************************************************************************************************
* *
* *
* *
* *
* bivector *
* +-------------------------------------------+ *
* / / *
* / <--. / *
* / vector | / *
* / ^ | / *
* / \ --' / *
* / \ / *
* / | \ contraction / *
* / | *-------------> / *
* / | / / *
* / / / *
* / / / *
* / v / *
* / projection / *
* / / *
* +-------------------------------------------+ *
* *
* *
* *
************************************************************************************************
The easiest formula for the contraction is: $A_n \rfloor B_m = \langle A B \rangle_{m - n}$. But that is hard to operate with, so:
1. $\forall a \in \mathcal{G}_1, \forall B \in \mathcal{G}_m \implies a \rfloor B = \frac{1}{2}(a B - \widehat{B} a)$
2. $\forall a \in \mathcal{G}_1, \forall B \in \mathcal{G} \implies a \rfloor B = -\widehat{B} \lfloor a$
3. Oops... what is $\lfloor$?
If $A \rfloor B$ "subtracts" $A$ from $B$, then $A \lfloor B$ "subtracts" $B$ from $A$. It kind of works the same way, just swapping the roles of the operands. That's why the symbol is not symmetric: you have to indicate which subtracts from which. Also: $(A \lfloor B) \wedge P_A(B) = B^2 A$. Ahem:
1. $\forall a \in \mathcal{G}_1, \forall B \in \mathcal{G}_m \implies a \rfloor B = \frac{1}{2}(a B - \widehat{B} a)$
2. $\forall a \in \mathcal{G}_1, \forall B \in \mathcal{G}_m \implies B \lfloor a = \frac{1}{2}(B a - a \widehat{B})$
3. $\forall a \in \mathcal{G}_1, \forall B \in \mathcal{G} \implies a \rfloor B = -\widehat{B} \lfloor a$
4. $\forall a \in \mathcal{G}_1, \forall B \in \mathcal{G} \implies a B = a \rfloor B + a \wedge B$
5. $\forall a \in \mathcal{G}_1, \forall B \in \mathcal{G} \implies \widehat{B} a = \widehat{B} \lfloor a + \widehat{B} \wedge a = -a \rfloor B + a \wedge B$
6. $\forall a \in \mathcal{G}_1, \forall B, C \in \mathcal{G} \implies a \rfloor (B C) = (a \rfloor B)C + \widehat{B}(a \rfloor C) = (a \wedge B)C - \widehat{B}(a \wedge C)$
7. $\forall a \in \mathcal{G}_1, \forall B, C \in \mathcal{G} \implies a \wedge (B C) = (a \wedge B)C - \widehat{B}(a \rfloor C) = (a \rfloor B)C + \widehat{B}(a \wedge C)$
8. $\forall a \in \mathcal{G}_1, \forall B, C \in \mathcal{G} \implies a \rfloor (B \wedge C) = (a \rfloor B) \wedge C + \widehat{B}(a \rfloor C)$
9. $\forall a \in \mathcal{G}_1, \forall B, C \in \mathcal{G} \implies a \wedge (B \rfloor C) = (a \wedge B) \rfloor C - \widehat{B} \rfloor (a \rfloor C)$
10. $\forall a \in \mathcal{G}_1, \forall B, C \in \mathcal{G} \implies a \wedge (B \lfloor C) = (a \wedge B) \lfloor C - \widehat{B} \lfloor (a \rfloor C)$
11. $\forall A, B, C \in \mathcal{G} \implies A \rfloor (B \lfloor C) = (A \rfloor B) \lfloor C$
12. $\forall A, B, C \in \mathcal{G} \implies A \rfloor (B \rfloor C) = (A \wedge B) \rfloor C$
13. $\forall A, B, C \in \mathcal{G} \implies A \lfloor (B \wedge C) = (A \lfloor B) \lfloor C$
Please note that the contraction is not associative: $A \rfloor (B \rfloor C) \neq (A \rfloor B) \rfloor C$. In fact, both of them mean different things.
* $A \rfloor (B \rfloor C) = (A \wedge B) \rfloor C$: the subspace of $C$ that is perpendicular to both $A$ and $B$.
* $(A \rfloor B) \rfloor C = A \wedge (B \rfloor C) \iff P_C(A) = A$: we replace $B$ with $A$ inside of $C$. It only works when $A$ is already a part of $C$.
I wanted to draw your attention to those formulas because they will become important later. They are known as the *duality formulas* and they're handy when working with duals.
## Interpreting the contraction
The contraction is a complex geometric operation, even though it is relatively simple algebraically.
* $A \rfloor B$ is a blade if $A$ and $B$ are blades.
* $A \rfloor B$ is orthogonal to $A$ and contained in $B$. In other words: "the part of $B$ orthogonal to $A$".
* $||A \rfloor B|| = \cos{\phi} ||A|| ||B||$. Even though we don't have a definition for the weight of blades yet, and we don't have a definition of what an angle is, I can count on your intuition for now.
* $A \rfloor B = 0$ if $A$ and $B$ are orthogonal.
* I'm not writing anything about $\lfloor$ because it's basically symmetric.
## Full Projections
Now we can do projections (and rejections!) on arbitrary blades:
* $P_B(A) = (A \rfloor B) B^{-1}$. The projection of the blade $A$ onto the blade $B$. It will be zero if
  * $A$ and $B$ are orthogonal.
  * $A$'s grade is greater than $B$'s grade.
* $R_B(A) = (A \wedge B) B^{-1}$. The rejection of the blade $A$ from the blade $B$.
* Both $P_B(A)$ and $R_B(A)$ will be of the same grade as $A$.
* $P_B(A)$ will be contained in $B$.
* $R_B(A)$ will be orthogonal to $B$.
That's pretty much it.
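Here is a sketch of the vector-onto-bivector case in plain Python (my own helper names; assuming a Euclidean space). With the spanning vectors orthonormalized first, $(a \rfloor B) B^{-1}$ reduces to summing the components of $a$ along each spanning vector of the plane:

```python
def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def project_on_plane(a, b1, b2):
    # Projection of the vector a onto the plane (blade) B = b1 ^ b2.
    u1 = b1
    # Gram-Schmidt: strip the b1 component out of b2
    s = dot(b2, u1) / dot(u1, u1)
    u2 = tuple(c2 - s * c1 for c1, c2 in zip(u1, b2))
    return tuple(dot(a, u1) / dot(u1, u1) * u1[i]
               + dot(a, u2) / dot(u2, u2) * u2[i]
                 for i in range(len(a)))

# Project onto the xy plane, spanned by two non-orthogonal vectors:
p = project_on_plane((1.0, 2.0, 3.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0))
# p == (1.0, 2.0, 0.0): the z component gets rejected
```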
## The Scalar Product.
There's a special case when you take the contraction of two blades of the same grade. The result will be a scalar and the product will always be commutative. We call this special case the scalar product $\ast : \mathcal{G}_n \times \mathcal{G}_n \to \mathcal{G}_0$. These properties make it easier to define the following:
* The modulus of a blade: $||A||^2 = A \ast \widetilde{A}$
* Handy commutativity formulas: $A \ast B = B \ast A = \widetilde{A} \ast \widetilde{B} = \widetilde{B} \ast \widetilde{A}$
* The angle between two blades: $\cos{\phi} = \frac{A \ast \widetilde{B}}{||A|| ||B||}$
* For blades of the same grade, the scalar product is equivalent to the left contraction ($\rfloor$), the right contraction ($\lfloor$) and the dot product ($\cdot$).
## What is an angle?
Turns out defining "angle" is a bit tricky. An easy definition is "the measure on how much you need to rotate something to have the same attitude as another thing". This is a nice definition except we can't rotate yet.
We'll start with the definition we gave you earlier ($\cos{\phi} = \frac{A \ast \widetilde{B}}{||A|| ||B||}$) and two unit vectors.
* The formula for two vectors is: $\cos{\phi} = \frac{a \cdot b}{||a|| ||b||}$. If they are unit vectors $a^2 = 1, b^2= 1$ then $\cos{\phi} = a \cdot b$.
* If the two vectors point to the same direction, we'll say their angle is zero. $\cos{\phi} = a \cdot b = 1 \iff \phi = 0$
* If the two vectors have the same attitude but opposite orientation, their angle is $\pi$. $\cos{\phi} = a \cdot b = -1 \iff \phi = \pi$
* If $a \wedge b \wedge x = 0$ and $\cos{\phi} = a \cdot x = b \cdot x$, then $\cos{2 \phi} = a \cdot b$.
* In other words, if three vectors are on the same plane and the angle between two pairs is the same, then the angle between the other pair will be double the first angle.
* As a result, the angle between orthogonal vectors is $\frac{\pi}{2}$. $\cos{\phi} = a \cdot b = 0 \iff \phi = \frac{\pi}{2}$
If we try to find a function that has all of those properties we can, indeed, define one:
$\cos{\phi} = \sum_{n = 0}^{\infty} \frac{(-1)^n \phi^{2 n}}{(2 n)!} = 1 - \frac{\phi^2}{2!} + \frac{\phi^4}{4!} - \frac{\phi^6}{6!} + ...$
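As a sanity check, the partial sums of that series converge to the standard cosine (a quick Python sketch; `cos_series` is my own name):

```python
import math

def cos_series(phi, terms=20):
    # Partial sum of sum_n (-1)^n phi^(2n) / (2n)!
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * phi ** (2 * n) / math.factorial(2 * n)
    return total

approx = cos_series(1.0)   # series with 20 terms
exact = math.cos(1.0)      # library cosine
```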
This is all nice, but it doesn't give us a sense of *direction*, the angle from $a$ to $b$ is the same as the one from $b$ to $a$. And that's ok... but we can do better. We have an operation that gives us a sense of direction: the wedge product!
Now the problem is that we have to choose a "positive" direction and a negative direction, and no matter how hard you think about that problem, the selection is always arbitrary. You can say "this angle goes in the opposite direction as that other angle", you can try to make a convention, but you'll always have to check every use of it against the convention chosen.
Here's a trick you can use:
* You are taking the angle with three vectors $a, b, c$ on the same plane $a \wedge b \wedge c = 0$.
* You are comparing the angles of $a$ to $b$ and $a$ to $c$.
* You select $a$ to $b$ to be the positive angle.
* You can define the sine as $\sin{\phi} = c \cdot \frac{R_a(b)}{||R_a(b)||}$.
* This means that you are taking a reference direction, perpendicular to $a$. If the vector you're taking the sine of points in that same direction, the sine will be positive; if not, it will be negative.
* Another way to do the same is to compare the bivectors. The weight of $\frac{a \wedge b}{||a|| ||b||}$ is proportional to the $\sin{}$, so you can compare it against the sign of $\frac{a \wedge c}{||a|| ||c||}$.
************************************************************************************************
* *
* *
* sine (s · perpendicular a) *
* | *
* | perpendicular a *
* *-------->---------------> *
* |\ | *
* | \ | *
* | \ | *
* | /\ | *
* | / \ | *
* |/ \ | *
* |angle \ | *
* cosine | \ *
* (c · a) -----v----- v *
* | b *
* | *
* v *
* a *
* *
* *
************************************************************************************************
We can define the sine with the following formula:
$\sin{\phi} = \sum_{n = 0}^{\infty} \frac{(-1)^n \phi^{2 n + 1}}{(2 n + 1)!} = \phi - \frac{\phi^3}{3!} + \frac{\phi^5}{5!} - \frac{\phi^7}{7!} + ...$
And we have some nice properties:
* $\sin{\phi} = \cos{\phi + \frac{\pi}{2}}$
* $(\cos{\phi})^2 + (\sin{\phi})^2 = 1$
* $\cos{(2 \phi)} = (\cos{\phi})^2 - (\sin{\phi})^2$
* $\sin{(2 \phi)} = 2 \cos{\phi} \sin{\phi}$
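Putting the pieces together in 2D (a Python sketch; the function name is mine): the dot product gives the cosine, the single coefficient of the wedge gives the signed sine, and `atan2` recovers a signed angle from $a$ to $b$:

```python
import math

def signed_angle(a, b):
    # cos comes from the dot product; sin from the (single) coefficient
    # of the 2D wedge product a ^ b. atan2 combines them into an angle
    # from a to b, positive in the x-to-y direction.
    cos_part = a[0] * b[0] + a[1] * b[1]
    sin_part = a[0] * b[1] - a[1] * b[0]
    return math.atan2(sin_part, cos_part)

quarter = signed_angle((1.0, 0.0), (0.0, 1.0))   #  pi/2
reverse = signed_angle((0.0, 1.0), (1.0, 0.0))   # -pi/2
```

Note how swapping the arguments flips the sign: that is exactly the sense of direction the wedge product adds.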
The angle between vectors is easy to understand, but what about the angle between two bivectors?
* $A = a_1 \wedge a_2$
* $B = b_1 \wedge b_2$
* $\cos{\phi} = \frac{(a_1 \wedge a_2) \ast (b_2 \wedge b_1)}{||A|| ||B||}$
* If we can find a common vector on both $A$ and $B$:
* $A = a \wedge c$
* $B = b \wedge c$
* $c^2 = 1$
* $a \cdot c = 0, b \cdot c = 0$
* $\cos{\phi} = \frac{(a \wedge c) \ast (c \wedge b)}{\sqrt{(a \wedge c) \ast (c \wedge a)} \sqrt{(b \wedge c) \ast (c \wedge b)}} =$ $= \frac{(a \cdot b) c^2}{\sqrt{a^2 c^2} \sqrt{b^2 c^2}} =$ $= \frac{a \cdot b}{\sqrt{a^2} \sqrt{b^2}} =$ $= \cos{\phi} = \frac{a \cdot b}{||a|| ||b||}$
* The angle between bivectors is equal to the angle of the two vectors that are not common.
* If there is no common factor, then $\cos{\phi} = 0$ and we say that those bivectors are orthogonal.
* We can extend those ideas to trivectors and beyond. It always boils down to finding common factors until we have at most one differing vector from each blade, then taking the angle between those.
# Rotations
And now for the party trick.
By a well-known theorem (Cartan–Dieudonné)[^CitationNeeded], any rotation can be represented as an even number of reflections.
So if $a x a^{-1}$ reflects $x$ through $a$, then $b a x a^{-1} b^{-1}$ has to do some kind of rotation.
And it does; I'll show you in a minute. First I want to make some simplifications. Imagine we take $a$ and $b$ such that $a^2 = 1$ and $b^2 = 1$. This is not needed, but it makes things easier, since $a^{-1} = \frac{a}{a^2} = \frac{a}{1} = a$, and the same goes for $b$.[^NonEuclideanRotations]
With that we can define a Rotor $R = b a$. That rotor is a *versor*, that is: a geometric product of vectors. Rotors are usually the sum of a scalar and a bivector, because that is what you usually get when you multiply two vectors (sometimes you get just a scalar, if they are parallel, or just a bivector, if they are orthogonal, but you can consider the missing scalar or bivector to be zero[^ZeroIsEverything] and there you have it: it is always the sum of a scalar and a bivector!).
Now I'll take the projection and rejection of $b$ and have the following definitions:
* $b = P_a(b) + R_a(b) = b_{\parallel} + b_{\perp}$
* $\cos{\phi} = a \cdot b$
* $b_{\parallel} = P_a(b) = (b \cdot a) a = \cos{\phi} a = \textbf{c} a$
* $b_{\perp} = R_a(b) = \sin{\phi}\, a_{\perp} = \textbf{s}\, a_{\perp}$, where $a_{\perp}$ is a vector in the same plane as $a$ and $b$, orthogonal to $a$, and in the same "direction" as $b$ with respect to $a$.
Ok.
Now we'll do the same with $x$.
* $x = x_{\uparrow} + x_{\parallel} + x_{\perp}$
* $x_{\uparrow}, x_{\parallel}, x_{\perp}$ are orthogonal.
* $x_{\uparrow} \cdot x_{\parallel} = 0$
* $x_{\uparrow} \cdot x_{\perp} = 0$
* $x_{\perp} \cdot x_{\parallel} = 0$
* $x_{\uparrow}$ is orthogonal to the plane $a$ and $b$ form. It is, in fact, its rejection:
* $R_{a \wedge b}(x) = x_{\uparrow}$
* $x_{\uparrow} \cdot a = 0$
* $x_{\uparrow} \cdot a_{\perp} = 0$
* $x_{\parallel} + x_{\perp}$ is the projection of $x$ in the plane formed by $a$ and $b$.
* $P_{a \wedge b}(x) = x_{\parallel} + x_{\perp}$
* $x_{\parallel}$ is parallel to $a$.
* $x_{\parallel} \wedge a = 0$
* $x_{\perp}$ is parallel to $a_{\perp}$.
* $x_{\perp} \wedge a_{\perp} = 0$
* Useful properties we will use:
* $a^2 = 1$
* $a_{\perp}^2 = 1$
* $a a_{\perp} = -a_{\perp} a = a \wedge a_{\perp} = -a_{\perp} \wedge a$
* $x_{\perp} a_{\perp} = a_{\perp} x_{\perp} = x_{\perp} \cdot a_{\perp} = ||x_{\perp}||$
* $x_{\parallel} a = a x_{\parallel} = x_{\parallel} \cdot a = ||x_{\parallel}||$
* $x_{\parallel} a_{\perp} = -a_{\perp} x_{\parallel} = x_{\parallel} \wedge a_{\perp} = -a_{\perp} \wedge x_{\parallel}$
* $x_{\uparrow} a = -a x_{\uparrow} = x_{\uparrow} \wedge a = -a \wedge x_{\uparrow}$
* $x_{\uparrow} a_{\perp} = -a_{\perp} x_{\uparrow} = x_{\uparrow} \wedge a_{\perp} = -a_{\perp} \wedge x_{\uparrow}$
* $(\cos{\phi})^2 + (\sin{\phi})^2 = 1$
* $\cos{(2 \phi)} = (\cos{\phi})^2 - (\sin{\phi})^2$
* $\sin{(2 \phi)} = 2 \cos{\phi} \sin{\phi}$
## The monster:
$R x \widetilde{R} =$ $b a x a^{-1} b^{-1} =$ $= (\textbf{c} a + \textbf{s} a_{\perp}) a (x_{\uparrow} + x_{\parallel} + x_{\perp}) a (\textbf{c} a + \textbf{s} a_{\perp}) =$ $= (\textbf{c} a a + \textbf{s} a_{\perp} a) (x_{\uparrow} + x_{\parallel} + x_{\perp}) (\textbf{c} a a + \textbf{s} a a_{\perp}) =$ $= (\textbf{c} - \textbf{s} a a_{\perp}) (x_{\uparrow} + x_{\parallel} + x_{\perp}) (\textbf{c} + \textbf{s} a a_{\perp}) =$ $= (\textbf{c} x_{\uparrow} - \textbf{s} a a_{\perp} x_{\uparrow} + \textbf{c} x_{\parallel} - \textbf{s} a a_{\perp} x_{\parallel} + \textbf{c} x_{\perp} - \textbf{s} a a_{\perp} x_{\perp}) (\textbf{c} + \textbf{s} a a_{\perp}) =$
$= \textbf{c}^2 x_{\uparrow} - \textbf{c} \textbf{s} a a_{\perp} x_{\uparrow} + \textbf{c}^2 x_{\parallel} - \textbf{c} \textbf{s} a a_{\perp} x_{\parallel} + \textbf{c}^2 x_{\perp} - \textbf{c} \textbf{s} a a_{\perp} x_{\perp}$ $+ \textbf{c} \textbf{s} x_{\uparrow} a a_{\perp} - \textbf{s}^2 a a_{\perp} x_{\uparrow} a a_{\perp} + \textbf{c} \textbf{s} x_{\parallel} a a_{\perp} - \textbf{s}^2 a a_{\perp} x_{\parallel} a a_{\perp} + \textbf{c} \textbf{s} x_{\perp} a a_{\perp} - \textbf{s}^2 a a_{\perp} x_{\perp} a a_{\perp} =$
$= \textbf{c}^2 x_{\uparrow} + \textbf{c} \textbf{s} a x_{\uparrow} a_{\perp} + \textbf{c}^2 x_{\parallel} + \textbf{c} \textbf{s} a x_{\parallel} a_{\perp} + \textbf{c}^2 x_{\perp} - \textbf{c} \textbf{s} a ||x_{\perp}||$ $- \textbf{c} \textbf{s} a x_{\uparrow} a_{\perp} - \textbf{s}^2 a x_{\uparrow} a_{\perp} a_{\perp} a + \textbf{c} \textbf{s} ||x_{\parallel}|| a_{\perp} - \textbf{s}^2 a x_{\parallel} a_{\perp} a_{\perp} a - \textbf{c} \textbf{s} x_{\perp} a_{\perp} a + \textbf{s}^2 a x_{\perp} a_{\perp} a_{\perp} a =$
$= \textbf{c}^2 x_{\uparrow} + \textbf{c} \textbf{s} a x_{\uparrow} a_{\perp} + \textbf{c}^2 x_{\parallel} + \textbf{c} \textbf{s} ||x_{\parallel}|| a_{\perp} + \textbf{c}^2 x_{\perp} - \textbf{c} \textbf{s} ||x_{\perp}|| a$ $- \textbf{c} \textbf{s} a x_{\uparrow} a_{\perp} + \textbf{s}^2 x_{\uparrow} a a + \textbf{c} \textbf{s} ||x_{\parallel}|| a_{\perp} - \textbf{s}^2 x_{\parallel} a a - \textbf{c} \textbf{s} ||x_{\perp}|| a - \textbf{s}^2 x_{\perp} a a =$
$= \textbf{c}^2 x_{\uparrow} + \textbf{c} \textbf{s} a x_{\uparrow} a_{\perp} + \textbf{c}^2 x_{\parallel} + \textbf{c} \textbf{s} ||x_{\parallel}|| a_{\perp} + \textbf{c}^2 x_{\perp} - \textbf{c} \textbf{s} ||x_{\perp}|| a$ $- \textbf{c} \textbf{s} a x_{\uparrow} a_{\perp} + \textbf{s}^2 x_{\uparrow} + \textbf{c} \textbf{s} ||x_{\parallel}|| a_{\perp} - \textbf{s}^2 x_{\parallel} - \textbf{c} \textbf{s} ||x_{\perp}|| a - \textbf{s}^2 x_{\perp} =$
$= \textbf{c}^2 x_{\uparrow} + \textbf{s}^2 x_{\uparrow}$ $+ \textbf{c} \textbf{s} a x_{\uparrow} a_{\perp} - \textbf{c} \textbf{s} a x_{\uparrow} a_{\perp}$ $+ \textbf{c}^2 x_{\parallel} - \textbf{s}^2 x_{\parallel}$ $- \textbf{c} \textbf{s} ||x_{\perp}|| a - \textbf{c} \textbf{s} ||x_{\perp}|| a$ $+ \textbf{c}^2 x_{\perp} - \textbf{s}^2 x_{\perp}$ $+ \textbf{c} \textbf{s} ||x_{\parallel}|| a_{\perp} + \textbf{c} \textbf{s} ||x_{\parallel}|| a_{\perp} =$
$= (\textbf{c}^2 + \textbf{s}^2) x_{\uparrow}$ $+ (\textbf{c}^2 - \textbf{s}^2) x_{\parallel}$ $- 2 \textbf{c} \textbf{s} ||x_{\perp}|| a$ $+ (\textbf{c}^2 - \textbf{s}^2) x_{\perp}$ $+ 2 \textbf{c} \textbf{s} ||x_{\parallel}|| a_{\perp} =$
$= (\textbf{c}^2 + \textbf{s}^2) x_{\uparrow}$ $+ (\textbf{c}^2 - \textbf{s}^2) (x_{\parallel} + x_{\perp})$ $+ 2 \textbf{c} \textbf{s} (||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a) =$
$= ((\cos{\phi})^2 + (\sin{\phi})^2) x_{\uparrow}$ $+ ((\cos{\phi})^2 - (\sin{\phi})^2) (x_{\parallel} + x_{\perp})$ $+ 2 (\cos{\phi}) (\sin{\phi}) (||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a) =$
$= x_{\uparrow}$ $+ \cos{(2 \phi)} (x_{\parallel} + x_{\perp})$ $+ \sin{(2 \phi)} (||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$
## The result of a rotation
That is a lot to take in. So, let me rewrite the result and decompose it a bit:
$R x \widetilde{R} =$ $= x_{\uparrow} + \cos{(2 \phi)} (x_{\parallel} + x_{\perp}) + \sin{(2 \phi)} (||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$ $= R_{a \wedge b}(x) + \cos{(2 \phi)} P_{a \wedge b}(x)$ $+ \sin{(2 \phi)} (||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$
The result is obviously a vector, and it's the sum of three vectors, which are multiples of $P_{a \wedge b}(x)$, $R_{a \wedge b}(x)$ and $(||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$. We could easily have obtained the first two vectors, the projection and the rejection, but the third has appeared out of thin air.
Looking further, we see that the rejection has been left intact. That is good, that is what a rotation should do: not modify the component orthogonal to the plane of rotation.
Now about $(||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$.
* It lies in the same plane as $a$ and $b$, as it is the weighted sum of $a$ and $a_{\perp}$, which are in that plane.
* It is orthogonal to $P_{a \wedge b}(x)$ : $(x_{\parallel} + x_{\perp}) \cdot (||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$ $= (||x_{\parallel}|| a + ||x_{\perp}|| a_{\perp}) \cdot (||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$ $= ||x_{\parallel}|| ||x_{\perp}|| - ||x_{\parallel}|| ||x_{\perp}|| = 0$
* $P_{a \wedge b}(x)$ to $(||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$ is the same "direction" as $a$ to $a_{\perp}$.
* $(||x_{\parallel}|| a + ||x_{\perp}|| a_{\perp}) \wedge (||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$ $= ||x_{\parallel}||^2 a a_{\perp} - ||x_{\perp}||^2 a_{\perp} a$ $= ||x_{\parallel}||^2 a a_{\perp} + ||x_{\perp}||^2 a a_{\perp}$ $= (||x_{\parallel}||^2 + ||x_{\perp}||^2) a \wedge a_{\perp}$ $= ||P_{a \wedge b}(x)||^2 a \wedge a_{\perp}$
* In fact, $x \rfloor (a \wedge a_{\perp})$ $= (x_{\uparrow} + x_{\parallel} + x_{\perp}) \rfloor (a a_{\perp})$ $= x_{\uparrow} \rfloor (a a_{\perp}) + x_{\parallel}\rfloor (a a_{\perp}) + x_{\perp} \rfloor (a a_{\perp})$ $= 0 + x_{\parallel} a a_{\perp} - x_{\perp} a_{\perp} a$ $= || x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a$
* $(||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$ is the contraction of $x$ on the unit bivector formed by $a$ and $b$.
* As such, it is into the $a \wedge b$ plane (as we've already shown) and is orthogonal to $x$.
* In conclusion, $(||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$ is $P_{a \wedge b}(x)$ rotated 90º ($\frac{\pi}{2}$) in the same direction as $a$ to $b$.
************************************************************************************************
* *
* *
* *
* *
* rejected *
* x ^ *
* ^ | *
* \ | *
* \ | *
* \ | *
* \ | *
* \| contracted *
* *----------------> *
* / *
* / ^ *
* / a ^ b | *
* / plane | a to b *
* / --------' direction *
* v *
* projected *
* *
* *
************************************************************************************************
To recap a bit.
* $R x \widetilde{R}$ $= R_{a \wedge b}(x) + \cos{(2 \phi)} P_{a \wedge b}(x)$ $+ \sin{(2 \phi)} (||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$
* $\phi$ is the angle from $a$ to $b$
* $R_{a \wedge b}(x)$ is the part of $x$ that is "outside" of the $a \wedge b$ plane. It is unmodified by the rotation.
* When $\phi$ is $0$, the result is $R_{a \wedge b}(x) + P_{a \wedge b}(x) = x$
* When $\phi$ is $\frac{\pi}{2}$ ($a$ and $b$ are orthogonal), the result is $R_{a \wedge b}(x) - P_{a \wedge b}(x)$. The rotation has done a 180º turn.
* When $\phi$ is $\frac{\pi}{4}$, the result is $R_{a \wedge b}(x) + (||x_{\parallel}|| a_{\perp} - ||x_{\perp}|| a)$. The rotation did a 90º turn in the direction of $a$ to $b$.
I hope I have convinced you now that a rotor rotates. And it does by double the angle of the vectors we used to create the rotor.
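To see it happen numerically, here is a minimal, self-contained geometric product for Euclidean 3D (a sketch of mine: basis blades are bitmasks, multivectors are dicts; nothing standard is assumed). We build the rotor from two unit vectors 45º apart and watch it turn $x$ a full 90º:

```python
import math

# Basis blades as bitmasks: bit 0 = x, bit 1 = y, bit 2 = z.
# A multivector is a dict {bitmask: coefficient}.
def blade_sign(a, b):
    # Sign from reordering the product into canonical basis order.
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1.0 if swaps % 2 else 1.0

def gp(u, v):
    # Geometric product; repeated basis vectors square to +1.
    out = {}
    for ba, ca in u.items():
        for bb, cb in v.items():
            blade = ba ^ bb
            out[blade] = out.get(blade, 0.0) + blade_sign(ba, bb) * ca * cb
    return {b: c for b, c in out.items() if abs(c) > 1e-12}

def rev(u):
    # Reversal: grade k picks up a sign (-1)^(k(k-1)/2).
    out = {}
    for b, c in u.items():
        k = bin(b).count("1")
        out[b] = c if (k * (k - 1) // 2) % 2 == 0 else -c
    return out

s = 1.0 / math.sqrt(2.0)
a = {0b001: 1.0}           # the unit vector x
b = {0b001: s, 0b010: s}   # a unit vector 45 degrees from x toward y
R = gp(b, a)               # the rotor R = b a

rotated = gp(gp(R, {0b001: 1.0}), rev(R))
# rotated is the y axis: x was turned 90 degrees, double the 45
# degrees between a and b, in the a-to-b direction.
```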
A thing to note is that, once the rotor has been created, the original vectors do not matter anymore: the rotor will rotate along a plane (a bivector?) and any two vectors in that plane with that given angle between them would result in the same rotor.
## The exponential.
Let me define a handy function.
$e^{A} = \sum_{n = 0}^{\infty} \frac{A^n}{n!} = 1 + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \frac{A^4}{4!} + ...$
If we feed this function a number from $\Real$, this is our old exponential. But we want to do fun things with it.
Like putting things that square to a negative number.
* $I^2 = -||I||^2$
$e^{I} = \sum_{n = 0}^{\infty} \frac{I^n}{n!} = 1 + I + \frac{I^2}{2!} + \frac{I^3}{3!} + \frac{I^4}{4!} + \frac{I^5}{5!} + \frac{I^6}{6!} + ...$ $= 1 + I - \frac{||I||^2}{2!} - \frac{||I||^2 I}{3!} + \frac{||I||^4}{4!} + \frac{||I||^4 I}{5!} - \frac{||I||^6}{6!} - ...$ $= (1 - \frac{||I||^2}{2!} + \frac{||I||^4}{4!} - \frac{||I||^6}{6!} + ...) + (I - \frac{||I||^2 I}{3!} + \frac{||I||^4 I}{5!} - ...)$ $= (1 - \frac{||I||^2}{2!} + \frac{||I||^4}{4!} - \frac{||I||^6}{6!} + ...) + (||I|| - \frac{||I||^3}{3!} + \frac{||I||^5}{5!} - ...) \frac{I}{||I||}$ $= (\sum_{n = 0}^{\infty} \frac{(-1)^n ||I||^{2 n}}{(2 n)!}) + (\sum_{n = 0}^{\infty} \frac{(-1)^n ||I||^{2 n + 1}}{(2 n + 1)!}) \frac{I}{||I||}$ $= \cos{(||I||)} + \sin{(||I||)} \frac{I}{||I||}$
Ooooops.
This is a rather well-known formula; in fact, it is sometimes called "Euler's formula"[^EulerIdentity]. But Euler's formula only works with $\Complex$; this one is a bit more general.
In case we need it later[^NeedLater], by the way:
* $e^A = \cos \alpha + \frac{A \sin \alpha}{\alpha} \iff A^2 = -\alpha^2$
* $e^A = 1 +A \iff A^2 = 0$
* $e^A = \cosh \alpha + \frac{A \sinh \alpha}{\alpha} \iff A^2 = \alpha^2$
Why am I explaining those things to you? Well, turns out representing rotors with an exponential is quite a handy thing to have.
* $R = \cos{\frac{\phi}{2}} - \sin{\frac{\phi}{2}} I = e^{-\frac{\phi}{2} I}$, where $\phi$ is the angle and $I$ the plane of rotation in the form of a unit bivector.
* A rotation is performed by $R x \widetilde{R} = e^{-\frac{\phi}{2} I} x e^{\frac{\phi}{2} I}$.
* If $x$ is in the plane $I$, $x \wedge I = 0$ then $e^{-\frac{\phi}{2} I} x e^{\frac{\phi}{2} I} = x e^{\frac{\phi}{2} I} e^{\frac{\phi}{2} I} = x e^{\phi I}$. This is how we make rotations using $\Complex$ numbers[^ComplexRotation].
* $e^{\lambda I} e^{\gamma I} = e^{(\lambda + \gamma) I}$. But $e^{A} e^{B} \neq e^{A + B}$ if $A$ and $B$ are bivectors with different attitudes.
* The logarithm (as this operation is called) is the right way to divide a rotation into multiple parts. If you have the rotation $R$ and you want to apply it in 10 steps (because you are doing an animation in a videogame), you can do so using: $R = e^{-\frac{\phi}{2} I} = e^{-10 \frac{\phi}{2 * 10} I}$ $= e^{-\frac{\phi}{20} I} e^{-\frac{\phi}{20} I} e^{-\frac{\phi}{20} I} e^{-\frac{\phi}{20} I} e^{-\frac{\phi}{20} I} e^{-\frac{\phi}{20} I} e^{-\frac{\phi}{20} I} e^{-\frac{\phi}{20} I} e^{-\frac{\phi}{20} I} e^{-\frac{\phi}{20} I}$. If you apply $e^{-\frac{\phi}{20} I}$ 10 times, each step will be the same rotation and the final result will be $R$.
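The 10-step split is easy to check numerically. In 2D the even subalgebra (scalar plus bivector) behaves exactly like the complex numbers, so a complex exponential is an honest stand-in for $e^{-\frac{\phi}{2} I}$ (a sketch, not a full GA implementation):

```python
import cmath

# A complex number stands in for a 2D rotor: 1j plays the role of the
# unit bivector I, since both square to -1.
phi = 2.0                          # total rotation angle
full = cmath.exp(-1j * phi / 2)    # the whole rotor
step = cmath.exp(-1j * phi / 20)   # one tenth of the exponent

composed = 1
for _ in range(10):
    composed *= step               # apply the small rotor 10 times
# composed is the same rotor as full
```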
### An actual useful factoid
The exponential form of a rotor has the following advantages:
* You need one fewer number to store. Imagine you are using rotors in $3D$. A rotor is 4 numbers: a scalar plus the 3 coefficients of a bivector, while the exponential form is only a bivector, so you only have to store 3 numbers.
* Adding exponents is wrong, but you can sometimes get away with it in up to $3D$. This is because the addition of bivectors in up to $3D$ always gives a blade, so you will always get *a valid rotation*, just not always the composition of the two original rotations (this is because addition is commutative and the geometric product is not: to apply one rotation after the other you have to multiply them, so changing the order won't always give you the same result). Sometimes you are not trying to perform one rotation after the other but to **interpolate** between them, and then $Interpolation(A, B, \alpha) = e^{A (1 - \alpha) + B \alpha}$ is a very good formula[^slerp].
## Rotors can rotate ANYTHING
By anything I mean anything useful, anyway.
What would be something useful?
* Any k-blade. You can rotate a bivector.
* A given bivector $B = v_1 \wedge v_2$.
* A given rotor $R$.
* $R B \widetilde{R}$ $= R (v_1 \wedge v_2) \widetilde{R}$ $= (R v_1 \widetilde{R}) \wedge (R v_2 \widetilde{R})$.
* The result is as if you had rotated the original vectors and then created the bivector.
* This is SO USEFUL.
* Any k-versor.
* A rotor is a 2-versor.
* You can rotate rotations.
* Composition is done through multiplication, though: $R = R_2 R_1$ applies $R_1$ then $R_2$. It's not the same as $R_2 R_1 \widetilde{R_2}$.
# The Pseudoscalar and the Dual.
The r-blade that fills all relevant space is sometimes called the *pseudoscalar*. The reason is that it behaves similarly to scalars. For instance, in $3D$ the pseudoscalar is the trivector: all trivectors in $3D$ are multiples of the same "unit" trivector, the same way all scalars are multiples of $1$. In $2D$ the pseudoscalar is a bivector, in $4D$ it is a 4-vector, and so on.
This leads us to the dual, or the orthogonal complement.[^Siggraph19Break]
The orthogonal complement is the part of the space left to fill by a certain subspace. In $3D$ the orthogonal complement of a vector is a bivector (a plane) orthogonal to said vector. This is the actual reason we are so used to using vectors to define planes: a vector can uniquely define a given plane because it is its orthogonal complement.
Some definitions.
* $I$ is the pseudoscalar.
* $I^2 = \pm 1$
* $A^{\perp} = A \rfloor I^{-1} = A I^{-1}$
* $A^{-\perp} = A \rfloor I = A I = I^2 A$
* $A^{\perp} = A^{-\perp} \iff I^2 = 1$
The reason we have a "dual" and an "anti-dual" operation is that sometimes the pseudoscalar squared is negative, so doing the dual two times will give you the original element negated. There is a good geometric reason for that.
In 2D, the dual of a vector is another vector. It has to be rotated 90º to be orthogonal, but it can go either clockwise or counterclockwise (it doesn't matter, it depends on what $I$ you choose), if you do the same operation twice you won't get the original vector but the one opposite of it.
In 3D the dual of a vector is a bivector. The vector represents the "normal" vector for the plane the bivector represents.
The dual of the bivector $B = \alpha yz + \beta zx + \gamma xy$ with the pseudoscalar $I = xyz$ is $B I^{-1}$ $= (\alpha yz + \beta zx + \gamma xy)\frac{xyz}{(xyz)^2}$ $= (\alpha yz + \beta zx + \gamma xy)(-xyz)$ $= -\alpha yzxyz - \beta zxxyz - \gamma xyxyz$ $= \alpha x + \beta y + \gamma z$[^zx].
Of course, the dual lends itself to some nice formulas.
* $(A B)^{\perp} = A B^{\perp}$
* $(A \wedge B)^{\perp} = A \rfloor B^{\perp}$
* $(A \rfloor B)^{\perp} = A \wedge B^{\perp}$
* $(A^{\perp})^{- \perp} = A$
* $(A^{\perp})^{-1} = I A^{-1}$
* $R_B(a) = P_{B^{\perp}}(a)$
Orthogonal complements, or duals (I'm using both names interchangeably), can be used to move an operation to a space where you are more comfortable. You've probably used this in the past: vector algebra has no concept of bivectors, so when it has to work with planes it does so through their dual, the normal vector. That does weird stuff sometimes, but it behaves nicely most of the time. Now you have the tools to know when it will do what you expect (or better yet, you can work with the actual object you need).
# So, what is a quaternion?
Some centuries ago, a guy named Hamilton was trying to find a way to multiply triplets. The reason (or one of the reasons) he was trying was that he knew that $\Complex$ numbers were able to rotate 2D points. Complex numbers were duplets, so if he could find a way to multiply triplets he would find a way to rotate 3D points. In his own words:
> Every morning in the early part of October 1843, on my coming down to breakfast, your brother William Edwin and
> yourself used to ask me: "Well, Papa, can you multiply triples?" Whereto I was always obliged to reply, with a
> sad shake of the head, "No, I can only add and subtract them."
He did not find any way to multiply triplets, but he did find a way to multiply quadruples, and he called them "quaternions".
And then everybody followed him, and sadness and despair ensued.
Quaternions work, and I'll show you why, but to really understand **how**, you will need everything I've written so far. And I have already written way too much.
So, what is a quaternion? A quaternion is a number of the form $\alpha + \beta i + \gamma j + \delta k$, where $i^2 = j^2 = k^2 = ijk = -1$.
How do you rotate a vector?
1. You transform the vector $(\beta, \gamma, \delta)$ into the quaternion $p = \beta i + \gamma j + \delta k$
2. You rotate it with the quaternion $q$ using the sandwich operation $q p q^{-1}$
3. You read the rotated vector back from the $i$, $j$, $k$ components of the result
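That recipe can be sketched numerically (the function names are mine; I'm using the standard Hamilton product, and a unit $q$, so $q^{-1}$ is just the conjugate):

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qrotate(q, v):
    # embed v as a pure quaternion, sandwich it, read the vector back;
    # for a unit q, the inverse is the conjugate (w, -x, -y, -z)
    w, x, y, z = q
    p = (0.0, *v)
    rw, rx, ry, rz = qmul(qmul(q, p), (w, -x, -y, -z))
    return (rx, ry, rz)

# a unit quaternion for a 90º turn about the z axis
half = math.pi / 4
q = (math.cos(half), 0.0, 0.0, math.sin(half))
```

With this $q$, rotating $(1, 0, 0)$ gives $(0, 1, 0)$: the $x$ axis lands on the $y$ axis.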
This looks *extremely* like a rotor in $3D$. How does this translate?
The translation that works is
* $I = xyz$
* $i = -Ix = zy$
* $j = -Iy = xz$
* $k = -Iz = yx$
* $ijk = zyxzyx = -yzzxxy = -yy = -1$
* $p = \beta x + \gamma y + \delta z$
* $p^{\perp} = \beta zy + \gamma xz + \delta yx$
* The rotation would be: $(q p^{\perp} q^{-1})^{- \perp}$
Let me unpack that: You have a quaternion $q$, that represents the rotation you want to do, and the point $p$ you want to rotate. You then get the dual of said point, rotate that plane, and get the anti-dual of the plane to have the rotated point.
This, indeed, works. Mostly because a rotation is an orthogonal transformation (if something was orthogonal before the transformation it remains orthogonal).
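You can check the translation table mechanically. Here is a small sketch (my own helper, not a library) that multiplies basis blades written as strings, with the euclidean rule $xx = yy = zz = 1$:

```python
def bmul(a, b):
    # geometric product of basis blades as strings over "x", "y", "z";
    # returns (sign, canonically sorted blade), e.g. bmul("zy", "zy") == (-1, "")
    sign, s = 1, list(a + b)
    i = 0
    while i < len(s) - 1:
        if s[i] > s[i + 1]:
            s[i], s[i + 1] = s[i + 1], s[i]  # swapping flips the sign
            sign, i = -sign, max(i - 1, 0)
        elif s[i] == s[i + 1]:
            del s[i:i + 2]                   # xx = yy = zz = 1
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, "".join(s)

# with i = zy, j = xz, k = yx:
# bmul("zy", "zy") gives (-1, "")   ->  i^2 = -1
# bmul("zy", "xz") gives (-1, "xy") ->  i j = -xy = yx = k
```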
I have seen lots and lots of explanations over the years of why quaternions work. None of them have been useful: mostly because they try to go into the 4th dimension, but in the end because it is all gibberish. The real reason quaternions work is that they are a simplified formula from Geometric Algebra that someone was able to find without the complete picture.
I hope you now have a better idea than most people who just use them.
# The (vector algebra) Cross Product considered harmful
This should be a rant similar to the one above. You've probably noticed that the vector algebra cross product is very similar to the geometric algebra outer product.
In fact, one can define it as $(a \wedge b)^{\perp}$. It follows the right hand rule if you use $xyz$ as the pseudoscalar, but you can choose a pseudoscalar that will give you left hand cross products.
It suffers the same problem quaternions do: it only works in 3D (the only dimension where vectors and bivectors are duals). Additionally, the vector you get as a result is what physics calls an "axial vector": it does not behave like all other vectors. Why? Because it should have been a bivector all along. But you didn't know better.
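As a sketch (the helper name is mine): computing the components of $(a \wedge b)^{\perp}$ with $I = xyz$ gives back exactly the textbook right-hand cross product formula.

```python
def cross_via_wedge(a, b):
    # a ^ b = (ay*bz - az*by) yz + (az*bx - ax*bz) zx + (ax*by - ay*bx) xy;
    # the dual with I = xyz maps (yz, zx, xy) onto (x, y, z) component-wise,
    # which is precisely the right-hand-rule cross product
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,
            az * bx - ax * bz,
            ax * by - ay * bx)
```

Note that the anticommutativity of the cross product is just the anticommutativity of the wedge showing through the dual.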
As a small demo of where this misconception leads, let me show you the Maxwell equations:
* $\nabla \cdot E = \frac{\rho}{\varepsilon_0}$
* $\nabla \cdot B = 0$
* $\nabla \times E = -\frac{\partial B}{\partial t}$
* $\nabla \times B = \mu_0 J + \mu_0 \varepsilon_0 \frac{\partial E}{\partial t}$
And now with geometric algebra:
* $( \frac{1}{c} \frac{\partial}{\partial t} + \nabla) F = \mu_0 c(c \rho - J)$
As you can see, the 4 equations have been reduced to one. This is because the magnetic field is now represented by a bivector instead of a vector, while the electric field is represented by a vector, so there is no risk of "mixing" them anymore. In fact, here you can see that they form a single field $F$, with a vector part and a bivector part. And you can go back to the original equations if you split the result into its four grades: $\mathcal{G}_0$ (Gauss' law), $\mathcal{G}_1$ (Ampère-Maxwell law), $\mathcal{G}_2$ (Faraday's law) and $\mathcal{G}_3$ (Gauss' law for magnetism).
# The Geometric Algebra Cross Product
# Meet and join
# The homogeneous space: into the 4th dimension
# The conformal model. 5 non-euclidean dimensions
# Bibliography
[#Lengyel16]: https://foundationsofgameenginedev.com/
[#Chisolm12]: https://arxiv.org/abs/1205.5935
[#GAComSci]: http://www.geometricalgebra.net/
[#Gunn19]: https://dl.acm.org/citation.cfm?id=3328099
# Footnotes
[^complex]: You can make a geometric algebra over the $\Complex$ numbers. But it would be really stupid, as anything where you need geometry and $\Complex$ numbers (2D rotations, electron spins) can be done with geometric algebra over the $\Real$s. But I'm not going to tell any mathematician that what they do has no real world application. They probably already know and are proud of it.
[^headtail]: it always works. It's the geometrical interpretation that sometimes fails. A velocity is not "longer" than another velocity, just faster. But if you were to draw them as longer and then add them, the trick would work.
[^ortho]: we also need to define the weight of both vectors to not be zero, and other stuff. But that would raise questions I don't want to answer yet.
[^wedgeCat]: I haven't found any translation into Catalan, so I'm naming it "producte falca" because I can and you probably don't care.
[^grades]: not really. You can multiply 2 vectors, then multiply 2 other vectors and add the results together, and it will still be grade 2. I can't find a proper definition of grade that does not need the proper definition of the outer product ($a_1 \wedge a_2 \wedge ... \wedge a_r = \frac{1}{r!} \sum_{\sigma}{(\mathrm{sgn}\, \sigma) a_{\sigma(1)} a_{\sigma(2)} ... a_{\sigma(r)} }$, where $\sigma$ is a permutation of 1 through $r$, $\mathrm{sgn}\, \sigma$ is the sign of the permutation (1 for even and −1 for odd), and the sum is over all $r!$ possible permutations [#Chisolm12]). And I really don't want to go there.
[^multivectors]: I haven't told you what a multivector is, right? Well, a multivector is about anything our algebra has. You add a scalar and a vector? Bum! A multivector. You add a vector and a bivector? Multivector. Even a vector *is* a multivector, it does not have to be heterogeneous.
[^TheCliffordConjugation]: There is a third related operation: the Clifford conjugation. It happens if you do both a grade involution and a reversion, in any order. The fact that I don't know any practical use for it and I want to use its symbol for something else means I had to delete it from history. Do you know where I can find a time machine?
[^TrivectorTrick]: Note that this trick only works in 3 dimensions. In 2 dimensions vectors are always contained within the only bivector, while in 4 dimensions and up a vector can be contained in a bivector, cross it, miss it, or something else entirely.
[^NoInversesYes]: I'm not going to give the formula for the inverse of any blade yet. I need the Scalar product, the good one, and that comes later.
[^DirtyTrick]: Here I'm using a dirty trick: because $x$ and $y$ are orthogonal, the geometric product is equivalent to the outer product. So it anticommutes.
[^ComplexNumbers]: Yup, the algebra of the scalars and the bivector of 2D euclidean space is 100% equivalent to the $\Complex$ numbers. $I$ is totally indistinguishable from $i$. That's why you never really need $\Complex$ if you are working with geometric algebra. Odds are any weird trick you are doing with them can be easily expressed through bivectors (or trivectors, which also square to $-1$; 4-vectors don't, though).
[^TheGeomProdIdentity]: I showed a while ago that $a b = a \cdot b + a \wedge b$
[^RememberScalars]: Note that $x \cdot a$ and $\frac{1}{a^2}$ are scalars and thus commute with anything.
[^DotDefinition]: $a \cdot b = \frac{1}{2}(a b + b a)$
[^WedgeDefinition]: $a \wedge b = \frac{1}{2}(a b - b a)$
[^DotConmutative]: $a \cdot b = b \cdot a$
[^WedgeAnticonmutative]: $a \wedge b = -b \wedge a$
[^BladeProjection]: You don't know how to project blades into other blades yet.
[^CitationNeeded]: Citation Needed. Really, I'd love to find said theorem, I copied that sentence verbatim from a book and it doesn't tell what theorem it is. I suspect [Hamilton](https://en.wikipedia.org/wiki/William_Rowan_Hamilton) is involved. But it's probably way older.
[^NonEuclideanRotations]: This supposes that a vector squared is a positive $\Real$, which is only guaranteed for non-zero vectors in euclidean space. If you didn't understand anything I just said, it's ok: non-euclidean spaces will come later.
[^ZeroIsEverything]: I probably have forgotten to mention this yet, but 0 is pretty much everything. Is zero a scalar? Sure. A vector? Why not? A bivector? By all possible definitions, yes. And so on.
[^EulerIdentity]: Euler's identity is $e^{i \pi} + 1 = 0$, and the formula is $e^{i x} = \cos{x} + i \sin{x}$
[^NeedLater]: We will use them. Eventually. Their derivation is similar to the one we did for $\cos$ and $\sin$. The only things you would need are the definitions of $\cosh{\phi} = \sum_{n = 0}^{\infty} \frac{\phi^{2 n}}{(2 n)!}$ and $\sinh{\phi} = \sum_{n = 0}^{\infty} \frac{\phi^{2 n + 1}}{(2 n + 1)!}$
[^ComplexRotation]: I've chosen to do $x e^{\frac{\phi}{2} I}$ instead of $e^{\frac{-\phi}{2} I} x$ because the product of $\Complex$ numbers is commutative and the geometric product is not. $x e^{\frac{\phi}{2} I} = e^{\frac{-\phi}{2} I} x$, unsurprisingly.
[^slerp]: Better than SLERP, IMHO.
[^Siggraph19Break]: As I'm writing this, a paper has just been released at SIGGRAPH 2019 [#Gunn19]. It pretty much invalidates everything I know because they use a degenerate metric (meaning there are vectors whose inner product with all other vectors is 0, which [#Chisolm12] says should never happen) and anything multiplied by the pseudoscalar will be zero. I still have to read it (it is long) so I don't know how they define the dual (or even the norm). Everything I wrote here is still useful, though.
[^zx]: This is the reason the 2nd factor of $3D$ bivectors is $zx$ and not $xz$. We could use $xz$, but then the dual would have a weird sign.