Addendum to A Swift Introduction to Geometric Algebra

  • Published on Nov 19, 2024

Comments • 161

  • @josh-nq2qs
    @josh-nq2qs 11 months ago +13

    Though I have a background in mathematics, G.A. has remained an elusive sub-branch: it seemed you either had to already be involved in PhD-related research, or you were left struggling to understand the nomenclature and/or its application beyond "unification" of multiple areas. I tried to read and minimally understand a Grothendieck paper (about tensors relating to topology)... I was pretty overwhelmed to say the least. After doing your series, I still don't really understand that paper, but I did gain a much better grasp on the subject generally speaking, and also on why it should be taught more / added to math curricula. Anyway I just wanted to say thank you and what a great job you did putting this whole thing together. I tell all who will listen to check out your channel.

  • @InXLsisDeo
    @InXLsisDeo 1 year ago +4

    19:34: it's remarkable and beautiful that a reflection is described by letters that are themselves "reflected".

  • @MsNikeNike
    @MsNikeNike 2 years ago +75

    Loved this explanation of the geometric product, it feels so natural in a way. And now you've gone and made me excited for different flavours of GA. Zero to Geo will be such an amazing resource if you continue as you're doing. I'm excited for this channel's future, and I hope this type of channel becomes more prevalent in the YouTube education sphere.

    • @yash1152
      @yash1152 2 years ago

      what is zero to geo?

    • @yash1152
      @yash1152 2 years ago +2

      okay, 0:19

  • @SomeMrMindism
    @SomeMrMindism 1 year ago +3

    My favorite bit about geometric algebra (that is, Clifford algebra) is that tensor algebra = Grassmann algebra + Clifford ideal (where the ideal is simply v tensor v plus its norm squared), so that the geometric multiplication is just the projection of the tensor product onto the exterior algebra, thus recovering in one move both the geometrical insightfulness of the exterior product and the power and simplicity of the tensor, explaining at the same time why the product has both a scalar and a bivector part!

  • @jakman2179
    @jakman2179 2 years ago +17

    I've done some personal study on Geometric Algebra, even going so far as buying a literal textbook on computer graphics using Geometric Algebra. I unfortunately didn't get too far in it because of other projects which I put before it, but your explanations of PGA and CGA have made things so much clearer to me. Thank you.

  • @smorcrux426
    @smorcrux426 2 years ago +28

    I just now started studying at university, beginning with a linear algebra course, and your videos on vectors were extremely helpful for me to get a better grasp on vectors! Thanks a lot

  • @Michallote
    @Michallote 2 years ago +7

    I'm mesmerized by the power of geometric algebra. holy cow

  • @muonneutrino_
    @muonneutrino_ 7 days ago +1

    Thank you so much for both of these videos (and the Zero to Geo series!!). They left me amazed, dazed, and wanting to learn more about this topic :D

  • @milandavid7223
    @milandavid7223 2 years ago +8

    Last time I watched your GA videos, they inspired me to pick up a book on the topic, but I got a bit overwhelmed by the formal definitions and caveats. Now I'm inclined to give it another go.

  • @hahahasan
    @hahahasan 2 years ago +8

    Your explanations are incredible. The information per sentence is astounding, yet somehow very coherent and understandable. I wish there were more resources akin to your brand of teaching when I was an undergrad. Please keep up the phenomenal work.

  • @diribigal
    @diribigal 2 years ago +7

    I knew very little about PGA/CGA, so this was a cool illustration!

  • @nianyiwang
    @nianyiwang 1 year ago +2

    The moment you pointed out that ab≠a·b+a∧b I suddenly realized that all we've been doing is just adding elements { (e0), e1, ..., en } to the damn real space and treating them like normal polynomials! Except that we have to simplify their products by the multiplication table, and that they are anti-commutative.

    • @InXLsisDeo
      @InXLsisDeo 1 year ago

      I didn't understand your comment. Can you explain it a little?

  • @JohnSmith-ch9sm
    @JohnSmith-ch9sm 2 years ago +17

    This has, so far, been a great series and very informative.

  • @pritamroy3766
    @pritamroy3766 2 years ago +4

    @sudgylacmoe dear sudgy, you saved my life and time with this series of videos, do you know that? You are a math god to me. I was struggling with Clifford algebra so much, but you are the Moses for me. You split the 'sea of difficulties' and helped me pass through from nothing to something. Thank you.

  • @AndreaCalaon73
    @AndreaCalaon73 2 years ago +4

    Finally, the intuitive definition. Combination of components! 👏

  • @mistermanoj3181
    @mistermanoj3181 several months ago +1

    Thanks @5:20. For some time I've wanted an intuitive feel for the cross product. It was easy to grasp the parallel components in the dot product, but the way the cross product shoots the result into a totally 'weird' dimension was a very unfulfilling feeling.
    Now that I know it results in a bivector whose magnitude can still be visualised in the same plane, and that its direction is only there to encode orientation, I have my closure.

  • @leocomerford
    @leocomerford 2 years ago +2

    Thank you for this! You should definitely update the video description for the Swift Introduction with a prominent link to this video, and maybe add a pinned comment as well.

  • @stevewhitt9109
    @stevewhitt9109 2 years ago +2

    Thanks for making this simple enough for even me to understand.

  • @Number_Cruncher
    @Number_Cruncher 4 months ago +1

    Great introduction. Thanks a lot.

  • @peterwan816
    @peterwan816 7 months ago +1

    You are really a legend. I’m gonna dive deep into it

  • @juandiegoparales9379
    @juandiegoparales9379 2 years ago +7

    I'll really enjoy this video :D

  • @TheLuckySpades
    @TheLuckySpades 1 year ago +1

    Man I wasn't expecting to see the stuff we covered in finite geometries to pop up here (e.g. using reflections to get all isometries), always neat to see that stuff pop up
    At least this was in English instead of German

  • @kristoferkrus
    @kristoferkrus 9 months ago +1

    I feel like this was more than just an addendum. But whatever; it was very super interesting and I learned a ton of stuff so I'm definitely not complaining! 🤩

  • @gbpferrao
    @gbpferrao 2 years ago +4

    Incredible job

  • @alphalunamare
    @alphalunamare 1 year ago +2

    Well that woke me up! It's a whole new way of thinking (I think). It has been a bit fast, but then I have never been too quick on the uptake; I tend to have to 'hand write' all the information in order to process it. I find the idea of multiplying circles hilarious and for there to be consistent mechanisms behind it to be absolutely amazing. In some models of our universe they say that an infinite line eventually loops back, so an infinite vector being akin to a circle isn't so far out as it seems at first glance. OK enough waffling .. I am off to your Geo stream :-)

    • @kutay8421
      @kutay8421 1 year ago

      'An infinite line will eventually loop back', but how? By spanning the whole universe, or by finding a lower dimensional projection (a line or a curve)? And how can you guarantee the second outcome?

    • @umbraemilitos
      @umbraemilitos 1 year ago

      @@kutay8421 Perhaps a compact universe without boundary... maybe a globally curved universe. Something like a torus or sphere might do it.

    • @kutay8421
      @kutay8421 1 year ago

      @@umbraemilitos the answer I got from ChatGPT:
      The question: You are on a 1 meter radius Torus. The thickness radius is 10 cm. You choose a random direction and start walking. What is the average path in meters you take before hitting the same starting position again?
      The answer: A = ∞
      The average path you take before hitting the same starting position again is infinite. This result might seem counterintuitive, but it is a consequence of the particular geometry of the torus and the random nature of the walk. It means that, on average, you would wander infinitely far before returning to your starting point. This does not mean you will never return; it just means that there is no finite average distance for the path you take.

  • @ChiangKaiSheksBrother
    @ChiangKaiSheksBrother 2 years ago +1

    Thanks for this! You answered almost all of the questions I had.

  • @leodorst5841
    @leodorst5841 2 years ago +6

    Perfectly done, thank you!

  • @chrisdock8804
    @chrisdock8804 2 years ago +1

    Just thought I would comment that while the outer/tensor product and wedge product may differ in general, b tensor a is also a linear transformation that takes a to ||a||^2 b ! So there is some relationship and I suspect you could make an identification in certain dimensions.

  • @sdsa007
    @sdsa007 2 years ago +1

    i am sooooo grateful for your videos! they are priceless!

  • @5ty717
    @5ty717 1 year ago +1

    Great work Manni

  • @TZPlayer
    @TZPlayer 2 years ago +10

    Your videos have given many an introduction to the wonderful concepts behind geometric algebra, I think we should all thank you! I am wondering where you learned these concepts and intuitions from, and if you could link those sources for us to take a look at.
    Also, after seeing your videos on GA, I tried learning about geometric calculus. Do you plan on making videos on that in your Geo to Hero series? And do you have any recommendations as to where we can learn it?

    • @sudgylacmoe
      @sudgylacmoe  2 years ago +5

      I've covered introductory textbooks and my journey in learning geometric algebra in questions six and seven of my FAQ: th-cam.com/users/postUgwFByhvEg1_hD_L1Ch4AaABCQ
      As for the concepts and intuitions that I presented in this video, I had to come up with most of them myself. Thus, I am unable to provide you with resources other than this video. Sorry.
      For geometric calculus, I am not planning on covering it in From Zero to Geo. Even back in college I struggled with vector calculus, and that has transferred somewhat over to geometric calculus. I have gotten to the point that I understand the basic concepts, but I am not at the point that I would be comfortable teaching the concepts to others. However, it is definitely a topic that I would like to be able to cover, and I hope that I might be able to learn the concepts well enough to do a series on it after From Zero to Geo is done.

    • @sudgylacmoe
      @sudgylacmoe  2 years ago +1

      I forgot to mention that if you want a textbook for geometric calculus, MacDonald's Vector and Geometric Calculus is a good start. If you want to go hardcore, you can go for Hestenes and Sobczyk's Clifford Algebra to Geometric Calculus.

  • @eamonnsiocain6454
    @eamonnsiocain6454 1 year ago +1

    Thanks! This was clear.

  • @tombouie
    @tombouie 2 years ago +1

    Thks & I likes the way you makes me think and I hates the way you makes my head hurt so-much (keeps-it-up ;).

  • @Dhruvbala
    @Dhruvbala 1 year ago

    Great video. A few dummy questions:
    4:20 why would we lose associativity?

    • @sudgylacmoe
      @sudgylacmoe  1 year ago

      Consider three vectors u, v, and w, such that u and w point in different directions. If the product of two vectors was always the product of their lengths, (uv)w would point in the direction of w and u(vw) would point in the direction of u, meaning that they are not equal and associativity is lost.
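
      A quick numeric sketch of that argument (a sketch only, assuming NumPy; the rule "the product of two vectors is the product of their lengths" is the hypothetical one being ruled out):

          import numpy as np

          u = np.array([1.0, 0.0])           # points along x
          v = np.array([0.6, 0.8])           # a unit vector in some other direction
          w = np.array([0.0, 1.0])           # points along y

          norm = np.linalg.norm

          # Under the hypothetical rule, any product of two vectors collapses to a scalar,
          # so the two ways of grouping the triple product give scaled copies of w and u:
          uv_then_w = (norm(u) * norm(v)) * w    # (uv)w points along w
          u_then_vw = (norm(v) * norm(w)) * u    # u(vw) points along u

          print(uv_then_w, u_then_vw)            # [0. 1.] vs [1. 0.]: unequal, so associativity would be lost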

  • @ywenp
    @ywenp 2 years ago +1

    Thanks for these video series, the animations are great and this one in particular shows the best intuitive explanation I have seen for what the geometric product is.
    That led me to the following intuition, which is probably tainted by the fact I mostly work with vector spaces in machine learning and natural language processing [1]:
    the geometric product ab "reshapes" (or "splits") the (a,b) pair into two new components: their similarity and their dissimilarity, ie. what they have in common vs. what they don't have in common. This intuition works if a and b are vectors, but as you showed it fails in higher dimensions where geom products can generate more than 2 terms. Do you think that intuition is still somewhat in the right direction, and would there be a way to generalize it so it applies whatever the dimension and whatever flavour of GA you are using?
    [1] In these fields, using vector encodings for bits of data (words for instance) is very common so you can use things like the dot product to measure the similarity between them.
    Eg. if you have a body of text containing documents labeled from 1 to N, some naïve approach would be to consider that you have an N-dim vector space. Each word occurring in this body of text would be a vector, and each component x_i of that vector just counts how many times that word occurs in document i. This approach therefore considers as similar the words that tend to frequently occur in the same documents (high dot product). Less naïve approaches would try to learn better vector encodings, do dimensionality reduction, etc., but the general logic remains valid.

  • @Ncasiano1022
    @Ncasiano1022 2 years ago +1

    This was a great resource!

  • @kashphlinktu
    @kashphlinktu 2 years ago +2

    So, it seems like the complex number plane is some kind of weird variation on a 2D vector space, where we only keep the scalar and the bivector parts? It's weird that a bivector would show up in the solution to polynomials.

    • @05degrees
      @05degrees 2 years ago

      Complex numbers and quaternions are actually the same as even subalgebras of geometric algebras for 2D and 3D: i, j, k are bivectors, 1 is a scalar. (An even subalgebra contains just elements of even degree and their sums etc..)
      And complex numbers (or quaternions or any algebra like geometric algebras of spaces) *are* a vector space, if you strip multiplication and conjugation from them and leave just addition (and scaling). It’s an extra structure that makes them complex numbers and not just points on a plane. 🙂

    • @Igdrazil
      @Igdrazil 1 year ago +1

      We don't "only keep the scalar and bivector parts" in 2D. We keep EVERYTHING! Points (that are modeled by scalars, as intersections of lines), vectors (that model oriented finite lines), and oriented surfaces (that are modeled by bivectors which measure their signed area). That is almost everything that is useful in 2D. But to be complete, in order to unify all curves in 2D, one also needs "a point at infinity", i.e. a map that exchanges the zero vector with infinity, just as in the Mercator Projection. The "North pole" of a circle is mapped to infinity and vice versa. And to build an algebraic model of such a crucial map, we need an object that sends to infinity, and from "there", sends back to zero when applied twice. In other words a non-zero object € that squares to zero: €^2=0.
      And it's easy to find one in 2×2 real matrix form: €=[0 0/1 0]. It is, in addition, a good candidate for the 17th century "infinitesimal", crucial for Calculus without "limits".
      And this crucial object has been hiding forever in the basic quadratic equation:
      x^2 + 1 = 0
      Which ought to be written for transparency :
      x^2 + ?^2 = €^2
      With one (or several) as-yet-unknown objects that square to +1 without being the trivial arithmetic +1 or -1.
      And again it's easy to find such objects. In 2D there are mainly TWO (up to isomorphism), which can be chosen in 2×2 real matrix representation as:
      j=[0 1/1 0] and k=[1 0/0 -1]. They do square to +1.
      And last but not least, it's easy to find in a similar form the right candidate that solves this quadratic equation, which has no real-number solution:
      x^2+1=0
      Take i=[0 -1/1 0]. It does square to -1, along of course with -i. Which gives TWO solutions to the quadratic equation. And it's obvious from its matrix representation that it's a ROTATION OF 90° anticlockwise. Or equivalently, but more fundamentally, a composition of two axial symmetries across lines passing through the origin with a relative separation angle of 90°÷2=45°. This last point is crucial to understanding why rotations of a vector V by angle 2@ are modeled by the sandwich product in GA: exp(@i).V.exp(-@i)
      Where i^2=-1 is the 2D bivector (or "pseudoscalar") made of the algebraic product of the two basic unit vectors (1,0) and (0,1).
      In other words this "i" object that squares to -1, and which from one viewpoint is a +90° rotation in 2D, ought to be more deeply seen as a POSITIVELY ORIENTED UNIT SURFACE modeled by the wedge product of the two unit basic vectors:
      i=(1,0)^(0,1).
      And since those two are orthogonal, their Euclidean dot product is zero (a scalar), thus:
      i=(1,0)^(0,1)=(1,0)(0,1).
      Nothing is lost, in such a representation, about the 90° rotation, and thus the squaring to -1, corresponding to a 180° rotation transforming a vector V into its opposite -V. This very simply illuminates the "mysterious squaring to -1".
      So all this game is in fact Geometry dressed in a fancy, powerful algebraic exoskeleton!
      So by the same logic of establishing a crucial universal dictionary between Geometry and Algebra (in an INTRINSIC manner with GA, and in an EXTRINSIC COORDINATE one with matrix representations), it's easy to decipher the "mysterious" objects j and k which square to +1, which is the IDENTITY. They are AXIAL SYMMETRIES, which bring back the initial identical figure when applied twice! No more than that...
      In particular, it's easy to check that j flips the coordinates of a vector (a,b) into (b,a). It's thus an axial symmetry wrt the y=x first bisector axis. The determinant of j, det(j)=-1, confirms that it's almost an isometry, but one that flips the orientation of figures. It's an axial symmetry in 2D wrt y=x.
      Same for k. It's an axial symmetry but wrt the x-axis.
      For i, det(i)=+1 confirms that it is a positive 2D linear isometry, which is thus a ROTATION. Translations are not linear, but affine.
      And it's because of this little complication, of not having a matrix representation, of not being linear maps, that we need to upgrade to one extra dimension that builds projective Geometry through homogeneous coordinates, in which translations become linear transformations, get a matrix representation and are upgraded into the same playground as all the other GEOMETRIC maps like symmetries and rotations. In other words, for such unification, which happens only in projective Geometry, one needs this "point at infinity" which gives an infinitely far center of rotation to all translations, which thus become rotations! In other words a straight line is a circle of infinite diameter!
      And such an upgrading, necessary for unification, is algebraically modeled by this € GA vector which squares to zero!
      So in summary, complex numbers are in one sense JUST 2D VECTORS. And what was hidden for millennia was the existence of a VECTOR PRODUCT upon them, in addition to the known (since Newton's time) vector space structure.
      Finally, do you still find it "weird" that multiplying TWO things gives even stuff?... We didn't "forget" anything... 😉

  • @slowpoke7785
    @slowpoke7785 4 months ago

    It's worth noting that the reflection $uvu$ is not a general formula. The general formula for reflection is $uvu^{-1}$, which comes from group theory: an isometry transformation is a conjugacy class. A little more on that can be found in the paper "Geometric Clifford Algebra Networks".
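
    A small numeric check of this point, using the standard vector identity u v u = 2(u·v)u - |u|^2 v and assuming NumPy (for a nonzero vector u, u^(-1) = u/|u|^2):

        import numpy as np

        u = np.array([3.0, 4.0])                 # a non-unit mirror direction
        v = np.array([1.0, 2.0])                 # the vector being reflected

        uvu = 2 * (u @ v) * u - (u @ u) * v      # expansion of the sandwich u v u
        uvu_inv = uvu / (u @ u)                  # u v u^(-1), since u^(-1) = u / |u|^2

        u_hat = u / np.linalg.norm(u)
        reflected = 2 * (v @ u_hat) * u_hat - v  # classical reflection of v across the line of u

        print(uvu)        # [41. 38.]: scaled by |u|^2, not itself the reflection
        print(uvu_inv)    # [1.64 1.52]
        print(reflected)  # [1.64 1.52]: only u v u^(-1) is independent of the scale of u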

  • @grantarneil8142
    @grantarneil8142 5 days ago +1

    9:25: did this remind anyone else of the addition of angular momentum in quantum mechanics?

  • @陳力歐-d2d
    @陳力歐-d2d 2 years ago +2

    I have been thinking about the connection between geometric product and matrix since the swift introduction video.
    At 12:30, "the geometric product AB is a linear transformation that transforms A to B"
    I think this would be equivalent to this description: if a 2D vector A(a1,a2) is multiplied by a 2x2 matrix M[m1,m2,m3,m4], we can get a new vector B(b1,b2)=MA, so M is a linear transformation that transforms A to B.
    The eigenvalue of M, λ, can be found by solving the equation (m1-λ)*(m4-λ)-(m2)*(m3)=0.
    I think the eigenvalue of M, λ, is equal to the geometric product of A and B.
    For example, take the matrix M[0,-1,1,0]: its eigenvalue λ can be found by solving (0-λ)*(0-λ)-(-1)*(1)=0, λ=i
    Given a vector A(1,0), vector B=MA=[0,-1,1,0]*(1,0)=(0,1), the geometric product AB=A⬤B+A∧B=0+i, they are equal.
    I was quite excited when I came up with this idea, but I don't have many friends in real life who are interested in this topic to discuss it with, so I hope somebody can give me some comments.

    • @MagicGonads
      @MagicGonads 2 years ago

      An n-dimensional square matrix is diagonalisable over C with algebraic multiplicity n, so there can be in general n different eigenvalues for M, how does this fit with your analogy?
      Also, the eigenspace has geometric multiplicity n, so there could be in general n different eigenvectors too
      We could think of that last fact as saying that A and B are related to the eigenvectors of M, but this doesn't make sense once we increase in dimension because the argument of the 2-d linear subspace needs more treatment

    • @陳力歐-d2d
      @陳力歐-d2d 2 years ago

      @@MagicGonads
      I've forgotten the details about matrix diagonalization and geometric multiplicity; I've searched a little bit, but still have not fully understood them, and thus can't fully understand your meaning. But I think the rough meaning you want to express is: for a given eigenvalue, there are many matrices that correspond to that value, so we can't know which one is right. But I'll talk about that later.
      I want to add two more idea to my previous idea.
      First, I should divide the result of the product AB by |A|^2, to do some kind of "normalization".
      Second, given two vectors A and B, B=MA, there are many M that can do this, but I want to focus on the type of M which can be considered as the sum of a "scaling matrix", [λ,0,0,λ], and a "rotation+scaling matrix", [λcosθ,-λsinθ,λsinθ,λcosθ].
      Because the eigenvalue of [a,0,0,a] is a, and the eigenvalue of [bcosθ,-bsinθ,bsinθ,bcosθ] is bcosθ+ibsinθ, their sum is a+bcosθ+ibsinθ. There are many possible combinations of a, b and θ, but it doesn't matter, any one is OK.
      More simply, if a matrix has eigenvalue a+bi, then it might be [a , -b , b , a], as this one
      www.reddit.com/r/math/comments/d7bxdu/matrix_representation_of_complex_numbers/
      So my conclusion is: given A and B, B=MA, to find M, first calculate AB, then divide it by |A|^2, and express that complex number in matrix form; that is a working M.
      I've tried a few, and it seems to be working, though I'm not quite sure how it's working yet, though it might be able to explained by simple algebra.
      For example, vector A(1,3) and vector B(5,7)
      AB=26-8i,|A|^2=10,
      AB/|A|^2=2.6-0.8i
      M=[2.6 , 0.8 , -0.8 , 2.6]
      B1=2.6*1+0.8*3=5
      B2=-0.8*1+2.6*3=7
      Another example,vector A(1,2) and vector B(3,4)
      AB=11-2i,|A|^2=5,
      AB/|A|^2=2.2-0.4i
      M=[2.2 , 0.4 , -0.4 , 2.2]
      B1=2.2*1+0.4*2=3
      B2=-0.4*1+2.2*2=4
      And I agree that I don't know how to do this trick in higher dimensions; I don't know how to express a+bi as a 3*3 matrix.
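
      A short numeric sketch of the recipe above (assuming NumPy; the [[a, -b], [b, a]] matrix layout is the one used in this thread):

          import numpy as np

          A = np.array([1.0, 3.0])
          B = np.array([5.0, 7.0])

          # Geometric product AB of two 2D vectors, written as a complex number:
          # real part = dot product, imaginary part = wedge (e1e2) coefficient.
          dot   = A @ B                        # 26
          wedge = A[0]*B[1] - A[1]*B[0]        # -8
          z = (dot + 1j*wedge) / (A @ A)       # (26 - 8i)/10 = 2.6 - 0.8i

          # The matrix built from z = a + bi, as in the comment: [[a, -b], [b, a]]
          M = np.array([[z.real, -z.imag],
                        [z.imag,  z.real]])

          print(M @ A)                         # [5. 7.] -> recovers B, as claimed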

    • @MagicGonads
      @MagicGonads 2 years ago

      @@陳力歐-d2d Like in the video, we ignore the magnitudes involved, only care about the orientation, so no need to correct with dividing by |A| or whatever

    • @MagicGonads
      @MagicGonads 2 years ago

      @@陳力歐-d2d My point is that a 2x2 matrix has two eigenvalues, for example [0, -1, 1, 0] has eigenvalues i and - i, so there are *two* values to consider, so which of those is ab and how do we assign that, and for 3x3 matrices this becomes three eigenvalues etc.

    • @陳力歐-d2d
      @陳力歐-d2d 2 years ago

      @@MagicGonads As you said, in the video at 12:05 he dropped the A squared. It seems to be a tiny, unobtrusive step taking just a few seconds in the video, but it is a very important step for me to make sure the result of the calculation is right, so I still want to emphasize it.
      M1= [0 , -1 , 1 , 0] has two eigenvalues, i and -i
      M2= [0 , 1 , -1 , 0] also has two eigenvalues, i and -i
      We'll choose i for M1, and -i for M2, because M1 is a CCW 90 degree rotation and M2 is a CW 90 degree rotation. Although the calculation pops out two answers, we have to decide on our own which one to use.

  • @pacificll8762
    @pacificll8762 1 year ago +1

    So great !!!!

  • @barthezslavik
    @barthezslavik 2 years ago +1

    It was freaking awesome

  • @7177YT
    @7177YT 1 year ago +1

    I need a recommendation for a good book on geometric algebra. I prefer a more rigorous style, rather than sketched proofs. On the other hand I appreciate insightful examples and applications of theory as well. Do you have any suggestions?

    • @sudgylacmoe
      @sudgylacmoe  1 year ago +1

      I answer this in question six of my FAQ: th-cam.com/users/postUgwFByhvEg1_hD_L1Ch4AaABCQ

    • @7177YT
      @7177YT 1 year ago

      @@sudgylacmoe cool! Thx!

  • @TesikLP
    @TesikLP 1 year ago +1

    The reason why it seems that (the equation in the thumbnail) doesn't hold universally could be the way the inner and outer products were introduced in the original video.
    According to their usual definitions (when the geometric product is defined axiomatically)
    a.b := 1/2*(ab + ba) (symmetric part of the product),
    a^b := 1/2*(ab - ba) (antisymmetric part of the product),
    it must hold due to the axioms of linear space that
    a.b + a^b
    = 1/2*(ab + ba) + 1/2*(ab - ba)      (definitions)
    = 1/2*((ab + ba) + (ab - ba))        (distributivity over vectors)
    = 1/2*(ab + ab + (ba - ba))          (associativity, commutativity)
    = 1/2*(ab + ab)                      (additive inverse)
    = 1/2*ab + 1/2*ab                    (distributivity over vectors)
    = (1/2 + 1/2)*ab                     (distributivity over scalars)
    = 1*ab = ab.                         (compatibility)
    Even though the algebra consists of elements of different grades, it still forms a linear space over the field (of real numbers) as a whole.
    In case of a product of two bivectors, the resulting bivector component is antisymmetric and thus part of the outer product.
    I'm not sure how this could be explained intuitively, though.

    • @sudgylacmoe
      @sudgylacmoe  1 year ago +1

      The inner and outer products are not the symmetric and antisymmetric parts of the geometric product. That's only true for vectors (well and a few other cases). Really I think the focus on symmetry and antisymmetry obfuscates the geometric, inner, and outer products, and it's what confused me when I made my first video.
      Really, the geometric product should be defined first (in a few different possible ways), and then the inner product is the lowest-grade part of the geometric product and the outer product is the highest-grade part of the geometric product. Everything else is just a special case of this fundamental idea.

  • @ReginaldCarey
    @ReginaldCarey 8 months ago

    Please more!

  • @marshallsweatherhiking1820
    @marshallsweatherhiking1820 1 year ago +1

    The VGA case is intuitive because I already understand the relationship between the dot product and the cosine of the angle between two vectors. For the PGA case, where you're moving lines around, I'm just taking your word for it. Guess I need to read a text to see an actual proof.

    • @sudgylacmoe
      @sudgylacmoe  1 year ago

      Interestingly enough, with e0 · e0 = 0, the inner product of two lines in PGA is precisely the magnitude of the two lines times the cosine of the angle between them, just like in VGA!
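
      A tiny numeric illustration of this reply (a sketch assuming NumPy; the [e0, e1, e2] component order and the reading "a1*x + a2*y + a0 = 0" for a 2D PGA line are conventions chosen here, not notation from the video):

          import numpy as np

          # A 2D PGA line a0*e0 + a1*e1 + a2*e2 stored as [a0, a1, a2],
          # with the degenerate metric e0·e0 = 0 and e1·e1 = e2·e2 = 1.
          G = np.diag([0.0, 1.0, 1.0])

          def inner(L1, L2):
              return L1 @ G @ L2                # the e0 components drop out entirely

          L1 = np.array([0.0, 1.0, 0.0])                          # the line x = 0
          L2 = np.array([3.0, np.cos(np.pi/3), np.sin(np.pi/3)])  # a line at 60 degrees, offset from the origin

          print(inner(L1, L2))    # 0.5
          print(np.cos(np.pi/3))  # 0.5 -> |L1||L2| cos(angle), unaffected by the offset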

    • @marshallsweatherhiking1820
      @marshallsweatherhiking1820 1 year ago

      @@sudgylacmoe The thing I’m not understanding is what you are talking about when you put points and vectors in space. The vectors in terms of the basis e0, e1, e2, e3 are planes in space now. But the planes are overdetermined as there are many ways to have a0 + a1X + a2Y + a3Z = 0 ( I’m using “Vector” = a0e0 + a1e1 + a2e2 + a3e3 ). Also, if vectors are now representing planes, what is representing the single “points” you are drawing and reflecting over the “planes” that are actually vectors. This is what is confusing.

    • @sudgylacmoe
      @sudgylacmoe  1 year ago

      Planes being overdetermined isn't really an issue. There isn't really anything stopping you from having multiple vectors representing the same object. It's pretty much just like homogeneous coordinates. Also, in n-dimensional PGA, points are n-1 vectors, and it turns out that exactly the same formula for reflecting planes works for reflecting points.
      In general I'll admit that I didn't really explain things in detail here. The idea was just to get a rough overview of these ideas, and not to enable you to actually do PGA yourself.

    • @marshallsweatherhiking1820
      @marshallsweatherhiking1820 1 year ago

      @@sudgylacmoe Okay. So the vector in "world space" is the hodge star of the vector in PGA space. The world-vectors are combinations of the pairs e1e2, e2e3, e1e3 in the PGA space. I have yet to see the math, but its at least making some sense now.

    • @angeldude101
      @angeldude101 1 year ago

      @@marshallsweatherhiking1820 PGA generally works with normalized subspaces. Multiplying an object by a scalar doesn't change its geometry, but does change its "weight" in a sense, which only affects how it adds with other objects. As for how points are represented, points are treated as point reflections, which in 3D are compositions of 3 orthogonal reflections, or equivalently, systems of 3 linear equations. Hence, they are the outer product of 3 mirrors, ie trivectors.

  • @franciscofernandez8183
    @franciscofernandez8183 2 years ago +18

    The main reason I see for the discrepancy in views between "From 0 to geo" and "A swift introduction" is precisely the "from zero" part.
    People who would care about and watch an accessible YouTube intro to geometric algebra probably aren't at "zero".
    If you don't know some linear algebra, use vectors, or at least know how they work, then you most definitely won't be into watching a video about a novel product between them.

    • @franciscofernandez8183
      @franciscofernandez8183 2 years ago +2

      TL;DR: The views come from the "geo" part, not the "zero" part.

    • @heinwol
      @heinwol 2 years ago +2

      That's totally true! I believe the majority of viewers have a decent understanding of linear algebra; they are simply bored by studying things they already know. We don't watch these videos not because the series is dull at its core, no, we're simply waiting for the interesting part to begin. As for me, I'm really looking forward to this moment!

    • @steffahn
      @steffahn 2 years ago

      My thoughts exactly.

    • @mastershooter64
      @mastershooter64 2 years ago

      plus there are already so many videos on youtube about linear algebra

    • @angeldude101
      @angeldude101 2 years ago +3

      I am still watching them as they come out since they are still somewhat interesting, but I won't go back to rewatch them nearly as much as I do for the Swift Introduction.

  • @05degrees
    @05degrees 2 years ago +1

    I want to know about CGA more, too!
    And also how do we derive, erm, abstractly why in PGA a vector represents a hyperplane? Naively when one had worked before with projective spaces as is, without geometric product added, one might think that vectors in PGA would represent points (and then of course get stuck in things not working at all). I mean if one uses just an exterior algebra, then it’s pretty natural that vectors, bivectors and so on would represent points, lines and so on. But we couldn’t add the scalar part to this, we somehow need to projective-dualize first for things to work. Why that’s so eludes me... 😑

    • @sudgylacmoe
      @sudgylacmoe  2 years ago +1

      Every geometric algebra has a "dual algebra" where the role of scalars and pseudoscalars are swapped, vectors and pseudovectors are swapped, etc. So thinking of these dual algebras is not a strange thing in and of itself. But that still leaves the question of why we should think to look at the dual algebra for PGA in the first place. Looking into it myself, I couldn't find any direct historical motivation. The oldest thing I could find was somebody just saying "Hey let's use planes". For some more motivation from something more recent, you can check out this video: th-cam.com/video/ichOiuBoBoQ/w-d-xo.html. They also did a longer series on PGA here: th-cam.com/play/PLsSPBzvBkYjxrsTOr0KLDilkZaw7UE2Vc.html

    • @05degrees
      @05degrees 2 years ago

      Thanks!

    • @angeldude101
      @angeldude101 2 years ago +2

      The way I picture it is that e₁ means x = 0, e₂ means y = 0, and e₃ means z = 0. PGA also has e₀ which could be interpreted as the equation 1 = 0, which is obviously a contradiction, but it can be added to the other equations giving results that actually have solutions. Taking the wedge product between two vectors is akin to making a system of linear equations, so e₁e₂ means the system of equations {x = 0, y = 0}.
      The other way of interpreting it is that mirror reflection is the simplest transformation and all others can be built up from multiple mirrors. 2 mirrors give a rotation around their shared intersection line. 3 mirrors give a reflection around the point where all three intersect. This is often called the "meet" operator, and it basically takes the intersection of two subspaces. Its opposite, the join operator, is actually a little more complex conceptually since it also fills in other points between the two, and how it does so can differ depending on the geometry. In CGA, the join of two points is a point pair, whereas in PGA it's a line. Why a point pair even exists as a primitive object can be unclear until you see how it's simply the intersection/meet between a circle and either a plane or a sphere, or in 2D: the intersection of two circles.

  • @emrekt22
    @emrekt22 2 months ago +1

    Came here from the original video, damn

  • @yash1152
    @yash1152 2 years ago +1

    5:41 "cancel each other" 🙄... oh, the direction

  • @tejing2001
    @tejing2001 2 years ago

    Regarding reflections in VGA, I think you've actually got it backwards when you say you want to avoid "thinking about normal vectors again". The most correct way to specify a reflection is to specify the thing you want to reflect *along*, not *across*.
    For example, when you reflect across a line in 2D, you don't actually care about the direction along the line at all, the reflection completely happens within the dimensions perpendicular to it, so it's actually *more natural* to describe it as reflection *along* the normal vector to that line, and this is exactly what VGA guides us toward doing.
    Another way to look at it is to consider how a transformation changes due to the mere addition of an uninvolved dimension. If you're reflecting across a line in 2 dimensions, and someone throws a new dimension into the space, suddenly your reflection has to be across a plane to still be the same reflection. If you think in terms of reflecting *along* things, this doesn't happen. The reflection is still along the exact same vector as before. Uninvolved dimensions are actually uninvolved.

    • @angeldude101
      @angeldude101 2 years ago

      But that's just reflection across a plane. In PGA, it doesn't actually care about the objects used and instead only cares about the transformations. In PGA, a vector is a single reflection; a mirror. By convention, interpreting this mirror as a distinct object is to use the eigenspace. The points that remain unchanged by this reflection are the ones in the plane the mirror lies in, so a vector is interpreted as a plane, at least in 3D. In 2D, this is a line. In CGA, you're also able to use spherical mirrors for inversions, so there vectors can be interpreted as spheres.
      There is no need to invoke "normal vectors" when e₁ isn't a line segment in the x direction, but rather the equation x = 0. The equation is a point in 1D, a line in 2D, and a plane in 3D, but ultimately it is the same vector and same reflection.

    • @tejing2001
      @tejing2001 2 years ago

      @@angeldude101 My point was only that the representation of reflections in VGA isn't unnatural at all, contrary to 15:47. PGA and CGA are a whole other matter, and I'm not discounting their usefulness :-)

    • @angeldude101
      @angeldude101 2 years ago

      @@tejing2001 Something about "mirror space" as I like to call it is that it appears to work regardless of the geometry you're working in. Even in VGA, you can represent vectors as mirrors and then reflect across them using the sandwich product. Your only restriction is that the mirrors can only be planes through the origin.
      Reflecting _along_ a line on the other hand seems very unnatural since the reflection has little to do with that line beyond just happening to be perpendicular to it, and that fact _only_ seems meaningful in VGA rather than other spaces and that's entirely because the origin is treated as special.
      If you have a line, the most natural reflection is across it. Such a reflection also acts as a 180° rotation, and pure rotations are bivectors. Ergo, lines should be bivectors. Said reflection is also much less commonly used than a mirror reflection, so it would make more sense for the mirror to be the primitive transformation rather than the line orthogonal to it.

    • @tejing2001
      @tejing2001 2 years ago

      @@angeldude101 A reflection most fundamentally involves inverting dimensions, so it's natural to specify it by the dimensions that are inverted, not the dimensions that are untouched.

    • @angeldude101
      @angeldude101 2 years ago

      @@tejing2001 You're thinking in terms of dimensions? So like directions? Generally geometric algebra focuses more on subspaces than directions. Not what directions are inverted, but what points are inverted. The points that are inverted are everything not lying on the plane, which includes all but one point on the line, as well as many points not on the line, but that doesn't form a subspace. The opposite, the points not inverted, do however form a connected subspace.
      Specifying the dimensions that get inverted feels like it fits well with a matrix though. [[1, 0], [0, -1]] does flip the y-coordinate specifically and not the x-coordinate, but geometric algebra generally doesn't concern itself with the actual coordinates of something.

  • @FlyGuyGreeny
    @FlyGuyGreeny 4 months ago +1

    Ohhh.... Wow 😮❤

  • @blinded6502
    @blinded6502 2 years ago +1

    Aren't multivectors tensors though? They just don't allow elements that have 2 or more equal indices.
    So the outer product isn't all that different between the two.

    • @sudgylacmoe
      @sudgylacmoe  2 years ago +1

      Multivectors are not tensors. While one of the constructions of geometric algebra involves defining the geometric algebra as a suitable quotient of the tensor algebra, it's best not to think about multivectors in this way, just like how we shouldn't think of the real numbers as being equivalence classes of Cauchy sequences.

    • @angeldude101
      @angeldude101 2 years ago +1

      From what I can tell, the GA outer product is like the tensor outer product but with the diagonal zeroed out, since those components correspond to the inner product. In this sense the full tensor outer product is more closely related to the full geometric product than just the GA outer product.
      And yes, it would be true to say that multivectors, or at least k-vectors, are tensors in the same way it's true to say that linear transformations are matrices. Matrices are just one extremely powerful way to express linear transformations, but they're also something of a sledgehammer solution. Geometric Algebra has a narrower focus, so it can't represent everything a matrix can, but what it can represent tends to be much clearer than the matrix form.

    • @malcolmkealey9018
      @malcolmkealey9018 2 years ago

      @@angeldude101 If you want to define multivectors in terms of tensors, I believe the correct way is to define the GA outer product to be the antisymmetrized tensor product.
      I.e. if we write the tensor product of two tensors A and B as T(A,B) (since I can't put the standard notation in a YouTube comment), then the wedge product of A and B is T(A,B) - T(B,A)

    • @05degrees
      @05degrees 2 years ago

      There is one interesting thing that antisymmetric tensors taken as is (from the vanilla tensor algebra) aren’t naturally isomorphic to elements of the antisymmetrized tensor algebra. It appears in concrete terms when people define antisymmetrization of tensors with or without dividing by n!, and that n! then appearing elsewhere, with no natural way to decide once and for all where does one hide it and where it’s a bad taste. So for the sake of representational gods we need not conflate elements of the exterior algebra and antisymmetric tensors, and we even do need to be careful with thinking p-vectors from exterior algebra are the same as p-vectors from geometric algebra (or any Clifford algebra): there are details in the dark too! I don’t particularly remember what’s up right now though.
      But our salvation is that we can define all these algebras by factorizing the tensor algebra and this factorization will make things in a right manner. To make the exterior algebra, we treat all tensors v ⊗ v (where v is a vector) equivalent to 0 (this is the “antisymmetrization” of the whole algebra). Now a ∧ b is represented by any tensor equivalent to a ⊗ b, like a ⊗ b or for example a ⊗ (b + a) − 2 b ⊗ b ⊗ a. Later we just use ∧ and its emerged properties as primitives and forget about all infinitely many equivalent tensors. Likewise, for geometric algebra or any Clifford algebra at all, we treat all tensors v ⊗ v as equivalent to the length of v squared, a scalar, and not just zero. Then we get from this all niceties we love. Again each thing has infinitely many representatives but we can just forget about underlying tensors and use our new notions and their properties as atomic.

  • @josemanuelbarrenadevalenci653
    @josemanuelbarrenadevalenci653 1 year ago

    I think the previous introduction is correct in G³.

  • @saudmolaib2764
    @saudmolaib2764 2 years ago

    I think I see a mistake. At around 11:28 it's stated that the product of three vectors in two dimensions is always another vector. However, this seems fishy. The product of two vectors a and b is the sum of a scalar and a bivector. If you take the product of this sum with a third vector, we get a vector from the scalar part, i.e. (a · b)c, but you get a scalar from the bivector part since we are in two dimensions, meaning the third vector lies in the plane spanned by a and b. Therefore, in general the product of three vectors in two dimensions is the sum of a vector and a scalar. This scalar part is only zero if two of the three vectors are in the same direction.
    Let me know if you agree with me or if you see a flaw in my reasoning!

    • @sudgylacmoe
      @sudgylacmoe  2 years ago +2

      The problem with your reasoning is that a bivector times a vector inside that bivector is a vector, not a scalar. As an example, e1 * e1e2 = e2. If you want to verify that the statement is correct for any three two-dimensional vectors, just calculate the general product of three vectors: (a e1 + b e2) * (c e1 + d e2) * (f e1 + g e2). If you fully distribute and simplify, you'll see that you are left with a vector.
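
      A symbolic check of that expansion (a sketch assuming SymPy; the four-component layout (scalar, e1, e2, e12) and the helper gp2 are just this sketch's own encoding of the 2D geometric product):

          import sympy as sp

          a, b, c, d, f, g = sp.symbols('a b c d f g', real=True)

          def gp2(X, Y):
              # Geometric product of 2D multivectors (s, x, y, B) = s + x*e1 + y*e2 + B*e12,
              # using e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1 = e12.
              s1, x1, y1, B1 = X
              s2, x2, y2, B2 = Y
              return (s1*s2 + x1*x2 + y1*y2 - B1*B2,   # scalar part
                      s1*x2 + x1*s2 - y1*B2 + B1*y2,   # e1 part
                      s1*y2 + y1*s2 + x1*B2 - B1*x2,   # e2 part
                      s1*B2 + B1*s2 + x1*y2 - y1*x2)   # e12 part

          u = (0, a, b, 0)    # a*e1 + b*e2
          v = (0, c, d, 0)    # c*e1 + d*e2
          w = (0, f, g, 0)    # f*e1 + g*e2

          print([sp.expand(t) for t in gp2(gp2(u, v), w)])
          # The scalar and e12 entries are 0, so the product of three 2D vectors is again a vector.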

    • @saudmolaib2764
      @saudmolaib2764 2 years ago

      @@sudgylacmoe Ahh, I see! Thanks for the explanation!

    • @marshallsweatherhiking1820
      @marshallsweatherhiking1820 1 year ago

      @@saudmolaib2764 It is true in any number of dimensions, as long as v3 is in the same plane as v1 and v2. If v1 and v2 are parallel, then their product is only their dot product, i.e. a scalar, which makes a vector when you multiply it by v3. If v1 and v2 are not parallel, then v3 can be written as a linear combo of v1 and v2, since it is in the same plane. Substitute the linear combo and distribute and you get a*v1*v2*v1 + b*v1*v2*v2. Both terms are vectors.

  • @TheFireminerhd
    @TheFireminerhd 1 year ago +1

    Why do we define the product of two parallel vectors to be the product of their lengths? In my head it would make more sense to get a vector with the length of the product of the two lengths. So 5e_x * 6e_x = 30e_x and not 5e_x * 6e_x = 30. There is probably a good reason why we would define it as a scalar, but what is the reason? 3:40 (and thanks for the great video)

    • @sudgylacmoe
      @sudgylacmoe  1 year ago +1

      Which vector would e_x * -e_x be? On one hand you might think it's e_x * e_x * -1 = e_x * -1 = -e_x, but then on the other hand e_x * -e_x = --e_x * e_x = -1 * -e_x * -e_x = -1 * -e_x = e_x, so we have both e_x * -e_x = -e_x and e_x * -e_x = e_x, which is impossible. We would have to lose associativity and/or distributivity to make this work, and we don't want that.

    • @TheFireminerhd
      @TheFireminerhd 1 year ago +1

      @@sudgylacmoe thank you! That was the simple contradiction I was looking for

    • @TheFireminerhd
      @TheFireminerhd 1 year ago

      @@sudgylacmoe after looking at it for a while, I have a question about the second statement.
      Wouldn't -e_x * -e_x result in +e_x?
      In longer: -e_x * - e_x = -1 * e_x * -1 * e_x = -1 * -1 * e_x * e_x = e_x * e_x = e_x

    • @sudgylacmoe
      @sudgylacmoe  1 year ago

      But you yourself said that the product of a vector with itself should be another vector pointing in the same direction.

    • @TheFireminerhd
      @TheFireminerhd 1 year ago

      @@sudgylacmoe ah I don't want to waste your time if I can't explain properly what I mean.
      I meant as an equation:
      (a * e_x) * (b * e_x) = a * b * e_x
      a,b are scalars. Does this make more sense?

  • @kutay8421
    @kutay8421 1 year ago +1

    Why DON'T you make a GEOMETRIC CALCULATOR ?
    So we may play and tweak with it and observe the results

    • @sudgylacmoe
      @sudgylacmoe  1 year ago +1

      Two reasons:
      1. A geometric calculator is highly non-trivial to make.
      2. People have already done similar things, like here: bivector.net/tools.html
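
      For experimenting in code, one option is the Python clifford package; a minimal sketch, assuming its documented Cl() interface (pip install clifford):

          import clifford as cf

          layout, blades = cf.Cl(3)           # 3D VGA: e1, e2, e3 all square to +1
          e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']

          a = 2*e1 + e2
          b = e1 + 3*e3

          print(a * b)    # full geometric product (scalar plus bivector parts)
          print(a | b)    # inner product
          print(a ^ b)    # outer product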

  • @alkalait
    @alkalait 2 years ago +1

    @15:10 you show that a reflection around a vector is a half-way rotation. Why is this an issue? Then you go on to show "a few ways to fix this". It's not very clear to me what needs fixing. Thank you.

    • @sudgylacmoe
      @sudgylacmoe  2 years ago +1

      The problem is that a half-way rotation is not a three-dimensional reflection. Three-dimensional reflections go through planes, not lines. So the issue is that the reflection formula that worked in two dimensions did not produce a reflection in three dimensions.

    • @angeldude101
      @angeldude101 2 years ago +3

      It's less of a problem with geometric algebra and more a problem with generalizing reflections to higher dimensions. Just looking at the complex plane shows why this is the case. Traditionally, -1 is a reflection around the origin, but since there's no halfway point to represent a transformation that when done twice gives a reflection, a new axis had to be added and suddenly the reflection has become a rotation, and the halfway point is a 90 degree rotation. The operation to reflect a real number is completely different from the operation to reflect a higher dimensional object and is also indistinguishable from a rotation in that higher dimension.
      With PGA, people insisted that reflection across a "vector" stay as a reflection in higher dimensions rather than becoming a rotation, even if every reflection is indistinguishable from a rotation through a higher dimension.
      Edit: I have since changed my stance on this. I now accept vectors as mirrors rather than directions, and that a rotation around a direction is to be a bivector (or composition of 2 mirrors) regardless of what that rotation is. The part about reflection being rotations through a higher dimension is however something I still hold and it relates to the even subalgebra of a geometric algebra. When going up a dimension, rather than simply adding a new basis vector, it's possible to encode the old algebra as the even subalgebra of the higher dimensional algebra. This explicitly turns reflections into rotations through the higher dimension.

  • @daigakunobaku273
    @daigakunobaku273 9 months ago

    Eh, I don't think you can produce any conformal transformation in 2D using a finite number of generators, because the 2D conformal group is infinite-dimensional, unlike in higher dimensions, where the dimension of the conformal group is finite.

  • @hiiistrex2838
    @hiiistrex2838 2 years ago +3

    When will you stop blowing my mind

  • @happmacdonald
    @happmacdonald 2 years ago +2

    Learns VGA and CGA
    Bookmarks chapters to study EGA and sVGA.

  • @eddyhedy5173
    @eddyhedy5173 2 years ago +1

    niiiice

  • @yoavboaz1078
    @yoavboaz1078 2 years ago

    I didn't understand why "things that are parallel" cancel. What happened to the inner product? When two parallel vectors are multiplied it gives a scalar, not 0...

    • @sudgylacmoe
      @sudgylacmoe  2 years ago

      By cancel, I mean their directions cancel. The magnitudes of those directions multiply.

    • @yoavboaz1078
      @yoavboaz1078 2 years ago

      @@sudgylacmoe but shouldn't it be a multivector? Because the scalar is still there

    • @sudgylacmoe
      @sudgylacmoe  2 years ago

      I'm not sure what your question is. When you multiply two parallel vectors, their magnitudes multiply and their directions cancel, leaving a directionless magnitude, which is a scalar. Scalars are also multivectors, so the result of the product is a multivector.

    • @yoavboaz1078
      @yoavboaz1078 2 years ago

      @@sudgylacmoe Ok I think I understand

  • @HamishArb
    @HamishArb 2 years ago

    Love your videos on GA, but one problem with your bit about the formula being wrong. The dot product is only defined initially for vectors (and numbers also work if you want) as inputs (unsure about the outer product). For multivectors, a dot b = 1/2 (ab+ba) and a outer b = 1/2 (ab-ba). Which yes is kind of a circular definition, but saying that ab != a.b + a^b is wrong afaik.
    It is built up by defining the dot product, the outer product, and multiplication by scalars on simple k-vectors (simple meaning with no pluses and all basis vectors perpendicular, i.e. (e_i).(e_i)=1 and e_i^e_j=-e_j^e_i and otherwise 0). The distributive law with geometric multiplication can then extend it to all multivectors using the simple product of k-vectors and ab=a.b+a^b (i.e. giving (e_n)^2=1 and e_i*e_j=-e_j*e_i) and how we multiply by a scalar; this then allows the generic redefinitions of the dot and outer products.
    For example (e1e2+e2e3+e3e4)(e3e4) = e1e2e3e4+e2e4-1 and (e3e4)(e1e2+e2e3+e3e4) = e1e2e3e4-e2e4-1: therefore dot (the symmetric part) = e1e2e3e4-1 and outer (the antisymmetric part) = e2e4. Note: I'm not an expert on this (all info from the internet), so if I'm wrong, please tell me where I'm wrong, thanks.

    • @sudgylacmoe
      @sudgylacmoe  2 years ago +1

      You just reminded me that there's another formula I should have said was not always true: a · b = 1/2 (ab + ba) and a ∧ b = 1/2 (ab - ba). Those are not always true as well. The standard definition of the inner and outer products is the lowest and highest grade part of the geometric product as I described here.
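
      The 4D example in the parent comment makes this concrete; a sketch, again assuming the Python clifford package:

          import clifford as cf

          layout, blades = cf.Cl(4)
          e1, e2, e3, e4 = (blades[k] for k in ('e1', 'e2', 'e3', 'e4'))

          A = e1*e2 + e2*e3 + e3*e4
          B = e3*e4

          print(A * B)                # -1 + e24 + e1234
          print(B * A)                # -1 - e24 + e1234
          print(0.5 * (A*B + B*A))    # -1 + e1234: the symmetric part mixes grades 0 and 4,
                                      # so it cannot simply be "the inner product" of A and B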

    • @HamishArb
      @HamishArb 2 years ago

      @@sudgylacmoe okay, that makes more sense. Thanks :)

  • @eternaldoorman5228
    @eternaldoorman5228 11 months ago

    Aw c'mon man, you're just makin' this shit up as you go along! 🙂

  • @Bolpat
    @Bolpat 2 years ago +1

    As you made a video about spacetime algebra (STA), do you plan on making videos about projective (PGA) and conformal geometric algebra (CGA) as well? The basis vector of PGA that squares to 0 is especially interesting because the zero vector already does that. This is in contrast to, e.g., the complex numbers, which extend the real numbers by i: i has a property not found in the real numbers and thus does not need that much explanation of _why_ we add it, whereas adding an object with a property already present likely has some interesting reasoning behind it. I've looked into en.wikipedia.org/wiki/Dual_number but it did not really make me feel I've learned something.

    • @sudgylacmoe
      @sudgylacmoe  2 years ago

      I'll be covering them eventually in From Zero to Geo.

  • @robfielding8566
    @robfielding8566 2 years ago

    oh my god. :-). This is why I prefer using explicit coordinates for as long as possible.

    • @angeldude101
      @angeldude101 2 years ago

      If you prefer using explicit coordinates, then GA is probably not for you. One of the biggest focuses in GA is representing various transformations abstractly with pure algebra, without specifying the structure of the objects you're transforming or even transforming with. Pretty much all formulae that aren't constructing a specific vector from coordinates in the first place are done without mentioning coordinates, or even dimension. a(b) = aba⁻¹ is some number of reflections, with the number of reflections equal to the highest grade of a. I have no idea what a looks like internally, the dimension the objects are in, or even the shape of the reflections, but I can say with relative certainty that they are reflections.

    • @rrr00bb1
      @rrr00bb1 2 years ago

      @@angeldude101 I used to think this until I started to use it in practice, in code. You can fool yourself into thinking you got it when you write it on paper. When you implement it in code, you discover that you can just implement a function GeometricProduct(a,b) once, and there are no special cases at all to implement when it's done internally with the coordinates. Then the definition of "is this a vector" is all a matter of which values are non-zero.
      When you read GA books and stop multiplying just vectors, the formulas are actually kind of complicated. Where you benefit from using special cases is when floating point precision is an issue, and you use all the special cases to nail down when you KNOW which components must be exactly zero, and therefore the types are exact.
      For example, multiplication of two 3D multivectors can leave you with a trivector that is very slightly non-zero, because of floating point. It is kind of complicated to write the code such that round-off errors ONLY happen in the values that are not guaranteed to be zero.

    • @rrr00bb1
      @rrr00bb1 2 years ago

      Implement GeometricProduct of two arbitrary multivectors in code to see what I mean. If you want your CODE to never mention coordinates, you implement this one function that multiplies two multivectors internally with the coordinates. It's so gloriously simple because all of GA is in this one tiny function. Then, you define two vectors a and b in 3D...
      // You only see coordinates when you actually specify WHERE a vector is,
      // and you are not just playing around on paper.
      // The operator * is the general multivector (geometric) product.
      GA.Dimensions(3)
      a := vec3(a1,a2,a3)
      b := vec3(b1,b2,b3)
      p := vec3(p1,p2,p3)
      // do a rotation
      r := a * b * p * Inv(b) * Inv(a)
      What do you think happens? The scalar part comes out to be like 0.0000000001, and the trivector part is like -0.0000000001.
      It's really only IsVector(r) when:
      - scalar == 0
      - trivector == 0
      - all 3 coefficients of bivectors are == 0
      How do you fix this? You basically have to write code that doesn't just have one GAMultiply function. You need a complicated compiler that looks at which coefficients are already exactly zero, and pick the best special case way to multiply the two multivectors so that the parts that MUST be exactly zero STAY exactly zero. Unless you want to demand completely symbolic calculations (ie: rational numbers, with symbolic roots and symbolic special numbers... answers like .... (235/3939 - pi^-2*e) .... with any kind of error in the result... some of the "not necessarily zero" values will end up being zero. Even tiny errors will make a big difference when you are dividing.
      This whole thing about being coordinate-free is only true when you are doing fictional calculations on paper. As soon as you implement code to actually manipulate vectors with realized values, you don't go all the way back to matrices, but the implementation gets complicated very quickly to deal with the fact that just converting everything to approximate numbers is an issue.

    • @rrr00bb1
      @rrr00bb1 2 years ago

      For example: at the end of a GA calculation that yields two vectors; how do you define "these two things are parallel", or "these two things are orthogonal"? in the face of approximate answers? When you try to do it symbolically, the number representations can get huge (ie: like when you sit in a loop collecting statistics for an hour and a pair of vectors come out of it... naive stddev calculation can be catastrophically wrong with naive floating point, or numbers too huge to compute with symbolically).

    • @rrr00bb1
      @rrr00bb1 2 years ago

      @@angeldude101 You DO need to know the dimension, or else answers will come out wrong, even when working symbolically and avoiding coordinates. GA is straightforward when you are talking about multiplying vectors. But when you are multiplying arbitrary multivectors, every possible combination is possible.
      a * b * p * inv(b) * inv(a).
      What drives me most crazy about standard GA books is that they talk about it being "object-oriented", and it's almost never clear how you would take a GA expression that is a function of a bunch of inputs, and compile it down to a function that computes the answer when given realized values as input.

  • @ginaherrera3166
    @ginaherrera3166 2 years ago

    What software are you using to make these?

    • @sudgylacmoe
      @sudgylacmoe  2 years ago +1

      This is covered in question 2 of my FAQ: th-cam.com/users/postUgwFByhvEg1_hD_L1Ch4AaABCQ

    • @ginaherrera3166
      @ginaherrera3166 2 years ago +1

      @@sudgylacmoe thanks!

  • @BlueGiant69202
    @BlueGiant69202 2 years ago

    Textbook?!! Bite your 19th century tongue! Where is the interactive software for my Physics as Modeling Workstation?? How about a Jupyter Notebook to start rather than a textbook? How about a suite of Brilliant Geometric Algebra exercises?

  • @hearteyedgirl
    @hearteyedgirl 4 months ago

    while the video says "inside" and "outside", it can be understood as spam

  • @alejrandom6592
    @alejrandom6592 1 year ago +3

    5:20 inside who?

  • @ucngominh3354
    @ucngominh3354 10 months ago +1

    hi

  • @ZedaZ80
    @ZedaZ80 2 years ago +1

    Hnnng thanks

  • @deankaraniya7422
    @deankaraniya7422 2 years ago +1

    Dango

  • @deltalima6703
    @deltalima6703 2 years ago

    This video makes me very sad. I wish all videos went at this pace, but sadly they all spend forever on trivial junk, then throw in a bunch of worthless jargon and turn into a mess.
    Does anyone know of any other math/physics topics in other videos shown at this same super fast pace?

    • @alphalunamare
      @alphalunamare 1 year ago

      You could try 'Physics with Elliot' .. that's always good for a binge :-)

  • @bobtheblob728
    @bobtheblob728 2 years ago

    acab at 11:48

  • @void2258
    @void2258 2 years ago +2

    I have found the last few episodes of Zero to Geo too basic. I realize you are going for rigor, but spending entire videos on defining basis and spaces is too much. This is probably a bias on my part since I have gone all the way through math courses and so already know this stuff from calculus and linear algebra classes, but I keep sitting there going 'come on get to the new stuff'.

  • @Igdrazil
    @Igdrazil 1 year ago

    Unfortunately your reasoning is CIRCULAR! Since you talk about « perpendicularity »… FROM WHICH you « define » the « dot product ». This is childishly CIRCULAR… You are just building a sand castle of illusion from an empty « definition »…

    • @sudgylacmoe
      @sudgylacmoe  1 year ago +1

      In my videos, I make understanding the primary concern, not rigor. I have developed a fully rigorous construction of geometric algebra here that doesn't have this issue (it starts with explicitly asking for a symmetric bilinear form): github.com/sudgy/math-from-nothing/tree/master/src/Algebra/Geometric.

    • @Igdrazil
      @Igdrazil 1 year ago

      @@sudgylacmoe Your answer sounds strange, since it confuses rigor and technicality. You can skip some technicality to widen the audience, but skipping rigor to fake « understanding » sounds rather weird. Especially in a subject of pure mathematics. And even more on a topic which is confusing many people. Some who don't understand because they can't open their mind to a new setting. Some who believe they understand through superficial or false reasonings. Making circular reasonings on such a crucial starting keystone as orthogonality is not an « understanding gift », but a poisonous shining apple that will lead to confusions and an extremely bad habit of weak reasoning and false logic. Especially when you use hypothetico-deductive rhetoric which suggests that the rigorous logic chain holds. It doesn't. And it hides, moreover, the need to think deeply about this tricky foundation of orthogonality, which is not essentially solved by « defining a bilinear form » etc. Mathematical « jargon » may help but is not the core of a problem. The core of it is: what is the starting point. What are points, vectors, etc. How do you define and get the OBJECTS. This is problematic. A second level of approach may consider TRANSFORMATIONS on objects. And paradoxically the best is to define them before defining the objects they act on, and use their inner properties to define the objects as invariants for those transformations, etc. In other words, the challenge is to stay rigorous without hiding behind jargon, by clear and constructive ideas that turn Pb into gold.