Linear Transformations -- Abstract Linear Algebra 8

  • Published 29 Aug 2024
  • ⭐Highly Suggested Linear Algebra books⭐
    Linear Algebra, an introduction to abstract mathematics: amzn.to/3rkp4Wc
    Linear Algebra Done Right: amzn.to/3vaEBKV
    The Manga Guide to Linear Algebra: amzn.to/3HnS59o
    A First Course in Linear Algebra: linear.ups.edu/
    Linear Algebra Done Wrong: www.math.brown...
    ⭐Support the channel⭐
    Patreon: / michaelpennmath
    Merch: teespring.com/...
    My amazon shop: www.amazon.com...
    ⭐my other channels⭐
    Main Channel: / michaelpennmath
    non-math podcast: / @thepennpavpodcast7878
    ⭐My Links⭐
    Personal Website: www.michael-pen...
    Instagram: / melp2718
    Randolph College Math: www.randolphcol...
    Research Gate profile: www.researchga...
    Google Scholar profile: scholar.google...

Comments • 30

  • @schweinmachtbree1013
    @schweinmachtbree1013 2 years ago +29

    29:38 This example is actually misleading, and the reason why is super subtle and interesting. The problem is that there is a difference between *polynomials* and *polynomial functions:* in the ring of polynomials k[X] we define two polynomials p(X) and q(X) to be equal iff all their coefficients are equal, whereas in the ring of polynomial functions (which doesn't really have a standard notation, so I'll call it poly_k[x]), two polynomial functions f: k -> k and g: k -> k are equal iff their values agree at all points of k (since they're functions).
    Over infinite fields this distinction doesn't affect things, because it is not hard to show that two polynomial functions are equal if and only if the corresponding polynomials are equal, so we have k[X] ≅ poly_k[x] (by the obvious isomorphism). However, over *finite* fields, e.g. k = *Z*/p*Z* for a prime p, this is not the case: if you know a bit of number theory then you will know from Fermat's little theorem that a^p ≡ a (mod p) for all integers a, i.e. a^p = a in *Z*/p*Z*, and hence the polynomials X^p and X are different polynomials (since their coefficients are different), but the corresponding *polynomial functions* f, g: *Z*/p*Z* -> *Z*/p*Z* given by f(x) = x^p and g(x) = x are the same function, so f = g. That is, the obvious map k[X] -> poly_k[x] is not injective when k is finite.
    Therefore Michael's proof T_a(f+g) := (f+g)(a) = f(a) + g(a) = T_a(f) + T_a(g) (and for scalar multiplication T_a(c.f) := (c.f)(a) = c.f(a) = c.T_a(f)) proves that T_a is a linear map from polynomial *functions* to k, i.e. that T_a: poly_k[x] -> k is a linear map. *However*, it does not in itself prove that the analogous evaluation map ε_a: k[X] -> k taking the polynomial p(X) = \sum_i c_i X^i to \sum_i c_i a^i ∈ k is linear - it is easy to show directly that ε_a is a linear map (i.e. a vector space homomorphism), but something alarming happens if we also consider multiplication of polynomials/polynomial functions.
    That is, suppose we consider k[X] and poly_k[x] as rings instead of vector spaces, with a coefficient ring R instead of a coefficient field k. In this situation it is of course true (and easily shown) that T_a(fg) = T_a(f) T_a(g), but it is *not true* in general that ε_a(p(X)q(X)) = ε_a(p(X)) ε_a(q(X)) - that is, evaluation of polynomials is not always a ring homomorphism!! But how can this be!? The problem arises when the coefficient ring is non-commutative: taking p(X) = \sum_{i=0}^n b_i X^i and q(X) = \sum_{j=0}^m c_j X^j where b_i, c_j ∈ R, one has:
    ε_a(p(X)q(X)) = ε_a((\sum_{i=0}^n b_i X^i)(\sum_{j=0}^m c_j X^j)) := ε_a(\sum_{i, j} b_i c_j X^{i+j}) := \sum_{i, j} b_i c_j a^{i+j}
    while
    ε_a(p(X)) ε_a(q(X)) = ε_a(\sum_{i=0}^n b_i X^i) ε_a(\sum_{j=0}^m c_j X^j) := (\sum_{i=0}^n b_i a^i)(\sum_{j=0}^m c_j a^j) = \sum_{i,j} b_i a^i c_j a^j,
    so the problem is that in general we don't have \sum_{i, j} b_i c_j a^{i+j} = \sum_{i,j} b_i a^i c_j a^j - we see that things will work when R is commutative, or more generally when a is in the center of R (i.e. when a commutes with all elements of R), so evaluation maps *at central elements* are ring homomorphisms. For a concrete example of when ε_a is not a homomorphism, take your favourite non-commutative ring R (e.g. the quaternions or a matrix ring) and fix two non-commuting elements a and b. Then one has ε_a(X × b) ≠ ε_a(X) × ε_a(b) (i.e. take p(X) = X and q(X) = b).
    To summarise: There is a subtle difference between polynomials and polynomial functions. Evaluation of polynomial functions is always a homomorphism. Evaluation of polynomials is a vector space homomorphism. But evaluation of polynomials is not a ring homomorphism unless the element one is evaluating at is central (i.e. an element of the center Z(R) of the coefficient ring R). This all may seem like abstract nonsense, but it is relevant whenever one is working with polynomials over a non-commutative field/ring; for example it comes up in several proofs of the well known Cayley-Hamilton theorem.
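
    A minimal check of both claims, sketched in plain Python (the prime p, the helper matmul, and the particular matrices a and b below are illustrative choices, not taken from the comment):

        p = 5  # any prime

        # 1) Over Z/pZ the polynomials X^p and X have different coefficients
        #    but define the same function, by Fermat's little theorem.
        assert all(pow(n, p, p) == n for n in range(p))

        # 2) Over a non-commutative coefficient ring (here 2x2 integer matrices),
        #    evaluation at a non-central element a is not a ring homomorphism.
        #    With p(X) = X and q(X) = b (a constant polynomial), p(X)q(X) = bX,
        #    so eps_a(pq) = b*a while eps_a(p) eps_a(q) = a*b.
        def matmul(A, B):
            return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                    for i in range(2)]

        a = [[1, 1], [0, 1]]
        b = [[1, 0], [1, 1]]
        assert matmul(b, a) != matmul(a, b)  # eps_a(pq) != eps_a(p) eps_a(q)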

    • @StanleyDevastating
      @StanleyDevastating 2 years ago

      Pretty cool note! It's easy to adapt the proof in the lecture: just write f(x) = sum a_i * x^i and T_alpha(f(x)) = sum a_i * alpha^i, with i ranging over all natural numbers. Then it all works out without needing to use functions.

    • @Unidentifying
      @Unidentifying 1 year ago

      insanely unreadable text sorry

    • @malawigw
      @malawigw 9 months ago

      @@Unidentifying not if you have a good IQ

    • @pauselab5569
      @pauselab5569 5 months ago

      If you have ever seen a French mathematician talk about polynomials, you know they will spend around an hour just telling you, over and over again, why polynomials P(X) are algebraic objects that form an algebra or a ring depending on how you look at it, while polynomial functions are functions from the field to itself. No idea why they put that much emphasis on it when the only time it matters is over finite fields.

    • @epsilia3611
      @epsilia3611 1 month ago

      @@pauselab5569 I'm french and wth did I just read lmao that's racist 😩

  • @iooooooo1
    @iooooooo1 2 years ago +4

    Nice video.
    To prove a transformation is a linear transformation, I quite like the equivalent condition that T respects arbitrary linear combinations. More formally, given arbitrary a1, a2 in F, and arbitrary v1, v2 in V, T(a1v1 + a2v2) = a1T(v1) + a2T(v2). To show that it's equivalent might be a nice exercise for a student at the level of this class.
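
    A quick numerical illustration of this combined condition (a sketch using numpy; the random 3x3 matrix, scalars, and vectors are illustrative choices):

        import numpy as np

        rng = np.random.default_rng(0)
        T = rng.standard_normal((3, 3))       # a linear map R^3 -> R^3, as a matrix
        a1, a2 = rng.standard_normal(2)       # arbitrary scalars
        v1, v2 = rng.standard_normal((2, 3))  # arbitrary vectors

        # T respects arbitrary linear combinations:
        assert np.allclose(T @ (a1 * v1 + a2 * v2), a1 * (T @ v1) + a2 * (T @ v2))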

    • @schweinmachtbree1013
      @schweinmachtbree1013 2 years ago +4

      the other equivalent condition T(v1 + a2.v2) = T(v1) + a2.T(v2) is also quite nice (although it is less symmetric) because v1 + a2.v2 is exactly the parametric equation for a line in a vector space, so the condition says precisely that "T maps lines to lines".*
      (*the direction vector of the image line is the image of the direction vector of the original line, and the starting point of the image line is the image of the starting point of the original line)

  • @nabla_mat
    @nabla_mat 2 years ago +3

    Thanks MP for this new channel!
    I like the idea of having this material on a separate channel; it makes it easier to look for what you want/need to see. I'm also liking the thumbnails a lot - informative and well designed. Congrats to you and your team!

  • @goodplacetostop2973
    @goodplacetostop2973 2 years ago +4

    34:31

  • @johningles1098
    @johningles1098 2 years ago +2

    26:40 I can't help but notice that here injectivity is the same as a non-zero determinant of the transformation matrix. Does this hold in general for linear transformations in any dimension?
    EDIT: Never mind - I found that the determinant is non-zero exactly when the linear map is an isomorphism, which brings injectivity with it.
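
    A small numpy check of that conclusion (the singular matrix is an illustrative choice): for a square matrix, a zero determinant comes with a nonzero kernel vector, i.e. a failure of injectivity.

        import numpy as np

        A = np.array([[2.0, 1.0], [4.0, 2.0]])  # det = 2*2 - 1*4 = 0
        print(np.linalg.det(A))                 # ~0.0: A is not an isomorphism
        print(A @ np.array([1.0, -2.0]))        # [0. 0.]: a nonzero kernel vector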

  • @m44lshannon
    @m44lshannon 2 years ago

    For anyone who has arrived at this via the playlist: it is missing lessons 6 and 7 (in case, like me, you didn't notice and were confused).
    Hopefully the channel fixes the playlist at some point. Absolutely loving this series though.

  • @oliverdixon4660
    @oliverdixon4660 2 years ago +1

    Is the first linear transformation shown in the first warm-up exercise non-surjective? Since it's possible to choose some _f_ in _k[X]_ of degree _n + 1_, the _deg(f) + 1_ coefficients required could not be produced from an _(n+1)_-tuple in _k^{n+1}_.

  • @lexinwonderland5741
    @lexinwonderland5741 1 year ago

    Wait, wouldn't the first warmup example be a bijection? Replacing each x^k with the unit basis vector e_k turns the sequence of coefficients of a polynomial into a geometric vector one-to-one, and vice versa, and the image spans the entire space of polynomials of degree at most n, since the coefficients can be any combination.

  • @alexbush9250
    @alexbush9250 2 years ago +1

    Q: For the homework problem T: k[x] -> k[x], when I check the kernel I get a differential equation of the form y' - xy = 0. Since this has no solution over polynomials, can I say that this transformation is injective?

    • @BiscuitZombies
      @BiscuitZombies 2 years ago +1

      It does have a solution over the polynomials, just a zeroth order one.

    • @yakov9ify
      @yakov9ify 2 years ago

      @@BiscuitZombies by convention we actually usually define the order of 0 to be -infinity. This is intuitively supported by the property order(p(x)q(x)) = order(p(x)) + order(q(x)).

    • @BetaKeja
      @BetaKeja 1 year ago

      The solution does have a Taylor series expansion though, right? If k[x] included infinite series, then there would be more solutions than y = 0 and the transformation would not be injective.
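
      A sympy check of this thread (assuming, per the question above, that the kernel condition is y' - xy = 0): the general solution is C1*exp(x**2/2), which has infinitely many nonzero Taylor coefficients, so the only solution that is an actual polynomial is y = 0 (take C1 = 0) and the kernel in k[x] is trivial.

          import sympy as sp

          x = sp.symbols('x')
          y = sp.Function('y')
          # General solution of the kernel equation y' - x*y = 0:
          print(sp.dsolve(sp.Eq(y(x).diff(x) - x * y(x), 0), y(x)))
          # -> Eq(y(x), C1*exp(x**2/2)); a polynomial only when C1 = 0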

  • @MrRyanroberson1
    @MrRyanroberson1 2 years ago

    1:40 Actually, I think... does that make exp a linear transformation?
    Let V = R(+, *s) and W = R+(*, ^s), where s is an element of the scalars (which is also R).
    For v1, v2 in V: exp(v1 + v2) = exp(v1) * exp(v2); exp(v1 * s) = exp(v1) ^ s.
    Really amazing how such an operation is still... linear.

    • @APaleDot
      @APaleDot 2 years ago

      How would such an operation be defined? What kind of object is exp(v1)?

    • @MrRyanroberson1
      @MrRyanroberson1 2 years ago

      @@APaleDot Well, v1 is chosen to be a real number (like 3.4 or e) which is *allowed* to be added to other numbers of the same kind, or to be multiplied by a real scalar s; it's all about which operations are permitted, basically, and that defines the space. exp(v1) is an element of W, and those real numbers (like e^3.4 or e^e) are allowed to multiply with one another, as well as to be raised to the power of any real scalar s. So 100% of these numbers are real numbers (number-line numbers), but by constraining which operations are allowed we can find equivalences between operations. This forms the basis of things like scientific notation or log-table arithmetic: log(x * y) = log(x) + log(y), so if you have a table full of values of log(x), you can multiply numbers using only addition; this was really, really useful before calculators.
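
      A numeric sanity check of this exchange (plain Python; the sample values are arbitrary choices): exp carries the operations of V = (R, +, scaling) to the operations of W = (R+, *, powers).

          import math

          v1, v2, s = 1.3, -0.7, 2.5
          assert math.isclose(math.exp(v1 + v2), math.exp(v1) * math.exp(v2))
          assert math.isclose(math.exp(s * v1), math.exp(v1) ** s)

          # the log-table identity mentioned above:
          x, y = 3.4, 12.0
          assert math.isclose(math.log(x * y), math.log(x) + math.log(y))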

  • @lucachiesura5191
    @lucachiesura5191 2 years ago

    Thank you! ker T is just the zero subspace, span(0).

  • @BiscuitZombies
    @BiscuitZombies 2 years ago +1

    Is there a link to the syllabus of this course?

  • @douglasmagowan2709
    @douglasmagowan2709 2 years ago

    #6 and #7 have fallen out of the playlist.

  • @ratandmonkey2982
    @ratandmonkey2982 2 years ago

    What happened to lectures 6 and 7?

    • @ratandmonkey2982
      @ratandmonkey2982 2 years ago

      Oh, lectures 6 and 7 are done; they are just not included in the Abstract Linear Algebra playlist.

  • @artificialresearching4437
    @artificialresearching4437 2 years ago

    I might be wrong, but this could work: th-cam.com/video/PvUrbpsXZLU/w-d-xo.html
    P.S. A piece of advice: play the video at 1.5x speed - I speak very slowly)