Spectral Theorem For Dummies - 3Blue1Brown Summer of Math Exposition

  • Published on 25 Dec 2024

Comments • 115

  • @schrodingerbracat2927
    @schrodingerbracat2927 3 years ago +288

    Matrices usually don't commute, because they like to work from home.

    • @toniwalter2911
      @toniwalter2911 3 years ago +9

      ​@@aaaa8130 to commute means to travel a specific route on a regular basis, for example most commonly from home to work.
      so, if a matrix were to go back and forth between its workplace and home, it would commute.
      finally @SchrodingerBraCat claims that matrices like to work from home (for whatever reason) and that this would be the reason why they don't commute.
      now i'm sure you would have understood all of this just from the first line i wrote but since i was already taking the fun out of the joke i might as well have done it properly.
      anyway i'm done wasting your time now, hope this helps :)

    • @davidk7212
      @davidk7212 3 years ago +3

      You should be ashamed of yourself 🤣

    • @danielfilippi5201
      @danielfilippi5201 6 months ago +1

      I know that I’m starting to understand the subject when I start to get these nerdy jokes😂

  • @iamlucky323
    @iamlucky323 3 years ago +239

    The last two questions will be left as an exercise for the reader... I was dead!!!
    Really enjoyed the video and the easy to follow explanation!

    • @jacquelinedoan2224
      @jacquelinedoan2224  3 years ago +17

      Haha I'm glad you enjoyed the joke!

    • @steviebudden3397
      @steviebudden3397 3 years ago +3

      @@jacquelinedoan2224 Yeah, that tickled me as well. :D

    • @fsponj
      @fsponj 7 months ago +2

      ​@@steviebudden3397☠️

  • @FareSkwareGamesFSG
    @FareSkwareGamesFSG 3 years ago +4

    The first ten seconds felt like what you feel when an advert on tv matches your situation so perfectly, you peek outside through the blinds and close them in fear.

  • @gmatree
    @gmatree 3 years ago +15

    I'm so glad 3b1b took this wonderful initiative. Your video, at least for me, is one of the best outcomes to come out of it.

  • @zornslemmon2463
    @zornslemmon2463 3 years ago +47

    I enjoyed this. I'm a few decades removed from a linear algebra class, so though I know (or knew) all of these concepts, I am a bit rusty and I found some of the content to move too quickly for me, but I don't think I am the intended audience (I do have the advantage of knowing where spectral theory is used and is useful). However, my expectations are not that I fully grasp it on a first run through, which I don't think is generally true for anyone who doesn't already know the content, and that the onus is upon me to rewatch it and, as Grant would say, "pause and ponder" the parts where I am rusty or deficient. For a first-ever math video, you have done an outstanding job and you should both be proud of what you put out. Congratulations.

  • @GabrieleScopel
    @GabrieleScopel 1 year ago +3

    “Consider the following matrix with real entries” proceeds to put sqrt(-2)

  • @tbttfox
    @tbttfox 3 years ago +26

    All righty, I'm going to tear into what you've done here, but that's because I think it's *worth* tearing into, and making better. This is a great start, but I don't think it's a great end-product (yet?)
    You're definitely overusing the smear-y text transition. Go watch Mathologer's algebra auto-pilot stuff. Like, really study it. Watch how the symbols move. The work he puts into it has to take forever, but it makes it *so* much easier to follow what's going on.
    For instance, the conjugate transposition at around 4:00. It doesn't show visually what's happening to the components of the matrix as you're explaining it. The way I'd probably do it is: The * should break into 4 bars and smear into the t as the 4 bars move over the components (nothing else should move). Then the T should fade away as you simply swap the bbar and cbar components, leaving the abar and dbar completely unmoved. Not smear them, but simply trade their places.
    Also around 3:00 same deal when describing the inner product. The "linear in the first slot" addition animation should duplicate and move the variables and operators that are the "same", and the lambda should just "hop out" of the multiplication. Also, the left hand side of the equations shouldn't smear at all since they're not changing (top is good, bottom isn't)
    Then at around 6:00, your grid isn't square, so projecting v onto your eigenvectors doesn't *look* like projection, and that's bad. It took like 3 times watching through that part to realize why it looked so wrong. I mean, we were just talking about orthogonal eigenvectors, and those blue vectors sure don't look orthogonal.
    One more thing I thought of. You also take time to re-introduce linear operators and the dot product as a projection/rejection, but you skip over the (arguably more complicated) eigenvector/eigenvalue. An animation following the pre-transformation eigenvectors through the transformation described by the matrix you put on screen would *definitely* help remind people what an eigenvector is, and you wouldn't even have to mention it in your script. Also, it could give them an early intuition of what self-adjoint matrices are, and what transforms they represent.
    A general note: if you show an animation when you're describing something, when re-using that concept later, you should most likely re-use that same (or a similar) animation to drive home your words. You're spending time coming up with a visual language for people to follow. So like if you talk about eigenvectors, follow the transformation. If you talk about projection, drop the perpendicular. If you're doing a linear transformation, show where the basis vectors will end up first, then do the transformation.

  • @pimcoenders-with-a-c1725
    @pimcoenders-with-a-c1725 3 years ago +43

    "But what are the applications" at the end...
    This is one of the most applicable parts of all of linear algebra haha, it's used sooo much in quantum mechanics!
    In QM, observables are given by hermitian operators (since these have real eigenvalues; it is for example not possible to have imaginary magnetization or energy). The spectral theorem asserts that such an observable will always have an orthonormal basis of eigenvectors, which is extremely useful! It means we can just solve the eigenval/eigenvec problem for our particular operator and we know pretty much everything.
    Also, it will later be proven that if two hermitian operators A and B commute (so AB=BA), then they share a common orthonormal basis of eigenvectors, i.e. they can be diagonalized simultaneously! This fact is also extremely important in quantum mechanics
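
    A minimal numpy sketch of those two facts (the matrices here are made up purely for illustration; eigh and allclose are just numerical checks, not the video's method):

    ```python
    import numpy as np

    # A made-up 2x2 Hermitian matrix: real eigenvalues, orthonormal eigenvectors.
    A = np.array([[2.0, 1 - 1j],
                  [1 + 1j, 3.0]])
    vals, V = np.linalg.eigh(A)                    # eigh assumes Hermitian input
    print(vals)                                    # eigenvalues come out real
    print(np.allclose(V.conj().T @ V, np.eye(2)))  # columns form an orthonormal basis

    # B is a polynomial in A, so it is Hermitian and commutes with A (AB = BA).
    B = A @ A + 2 * np.eye(2)
    print(np.allclose(A @ B, B @ A))               # True
    D = V.conj().T @ B @ V
    print(np.allclose(D, np.diag(np.diag(D))))     # True: the same V diagonalizes B
    ```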

    • @polyhistorphilomath
      @polyhistorphilomath 3 years ago

      Maybe there is a meaningful interpretation of a complex value. Consider a complex dielectric constant or resistance/impedance. I am not saying that every component of a geometric algebra is meaningful in all physical contexts, but maybe there is a potential dialectical synthesis which broadens the understanding of the real-valued quantity.

    • @pimcoenders-with-a-c1725
      @pimcoenders-with-a-c1725 3 years ago +2

      @@polyhistorphilomath Possibly; this is actually already happening with the operators we use for quantum computing; these are non-hermitian, thus giving complex eigenvalues

    • @pimcoenders-with-a-c1725
      @pimcoenders-with-a-c1725 3 years ago +1

      @@polyhistorphilomath also, dielectric constant and impedance can indeed be complex, but this is because they are either not quantum mechanical observables (but results of many, many interactions) or they are dynamical variables instead of static ones

    • @jdp9994
      @jdp9994 3 years ago +1

      @@polyhistorphilomath If you're interested in complex physics, have you considered the work on PT symmetric physics (uses non-Hermitian Hamiltonians with real eigenvalues, but complex coupling coefficients)? Carl Bender gives an easy introduction in th-cam.com/video/67ovj4c_VR8/w-d-xo.html

    • @kazachekalex
      @kazachekalex 3 years ago +1

      Thanks! That application in physics is actually what inspired me to pick this subject for the video, the applications section was more of a joke (:

  • @6884
    @6884 3 years ago +4

    at 3:50 I would have left-justified the equations, so that when you move the T the left term does not "change". It took me a while to check and double check to make sure nothing actually changed

  • @ruferd
    @ruferd 3 years ago +4

    I have a solution to the two exercise problems, but this comment box is too small to contain them.

  • @streetos
    @streetos 3 years ago +2

    Can we just appreciate the incredible graphics? This is eye candy 😍

  • @accountname1047
    @accountname1047 3 years ago +1

    There's a nice proof of the spectral theorem by projecting onto ever more nested subspaces that could have fit in here

  • @WonkaWastelander1
    @WonkaWastelander1 6 months ago

    "Matrices are just linear operators in disguise", I like that line.

  • @Jules-Henri-Poincare
    @Jules-Henri-Poincare 1 year ago

    I love the style and every single word of this video!!

  • @Yaketycast
    @Yaketycast 3 years ago +2

    I loved this video. The explanation is clear and interesting. And the visuals are so cute! Keep it up y'all!

  • @dannycrytser7268
    @dannycrytser7268 2 years ago +2

    Nice video. Minor nitpick: in your statement of the spectral theorem, you assert that "the eigenvectors {v_1,...,v_n} of T with eigenvalues {lambda_1,...,lambda_n} form an orthonormal basis for V" which is generally incorrect. The issue arises from saying "the eigenvectors" -- there are lots of them to choose from! If you write down a list of eigenvectors for all the eigenvalues, there is no guarantee that the eigenvectors are unit vectors, and (in the case of repeated eigenvalues) there is no guarantee that the vectors are orthogonal.
    For example, if T is the identity map from R^2 to R^2 (a normal operator), then you could write { (1,1), (2,2)} which would be a set of eigenvectors for the (repeated) eigenvalues {1,1}. However, this is not an orthonormal basis for R^2: the vectors aren't unit vectors and they aren't orthogonal. For normal transformations without repeated eigenvalues we never have trouble with orthogonality, but the unit vector issue can arise: for the diagonal matrix [[1,0],[0,0]] the basis B= {(1,0),(0,2)} certainly consists of eigenvectors for the complete set of eigenvalues {1,0}, but the second eigenvector is not a unit vector and hence B is not an orthonormal basis.
    To create the orthonormal basis of eigenvectors: first find a (typically non-orthonormal) basis B_k for each eigenspace of T by solving Tv=(lambda_k)v, then apply Gram-Schmidt to convert B_k into an orthonormal basis B_k'. (If you have repeated eigenvalues, you only find a single orthonormal basis for each eigenvalue, with number of vectors equal to the number of times that eigenvalue appears.) Then form the union of all these orthonormal bases to get the orthonormal basis B = B_1' u B_2' u ... B_n' for V. (The nice thing about normal operators is that all the different eigenspaces are orthogonal, so B will automatically be orthonormal.)
    A more precise statement: "If T:V->V is normal and {lambda_1,...,lambda_n} are the eigenvalues of T (possibly with repetition) then there exists an orthonormal basis {v_1,...,v_n} for V such that Tv_k = lambda_k v_k for k=1,...n."
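
    A small numpy sketch of the recipe above (the matrix T and the eigenvector choices are made up for illustration): take a basis of each eigenspace, orthonormalize it with Gram-Schmidt (QR does this), then take the union.

    ```python
    import numpy as np

    # Made-up real symmetric (hence normal) matrix with eigenvalues 2, 2, 4.
    T = np.array([[2.0, 0.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [0.0, 1.0, 3.0]])

    # Hand-picked eigenvectors: a non-orthonormal basis of the lambda=2 eigenspace
    # and one eigenvector for lambda=4. They satisfy T v = lambda v, but they are
    # not yet an orthonormal basis (exactly the pitfall described above).
    E2 = np.array([[1.0, 2.0],
                   [0.0, 1.0],
                   [0.0, -1.0]])          # columns: eigenvectors for lambda = 2
    E4 = np.array([[0.0], [1.0], [1.0]])  # column: eigenvector for lambda = 4

    # Gram-Schmidt (via QR) inside each eigenspace, then union the results.
    Q2, _ = np.linalg.qr(E2)
    Q4, _ = np.linalg.qr(E4)
    B = np.hstack([Q2, Q4])

    print(np.allclose(B.T @ B, np.eye(3)))  # True: B is an orthonormal basis of R^3
    print(np.round(B.T @ T @ B, 10))        # diag(2, 2, 4): B diagonalizes T
    ```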

  • @donovanholm
    @donovanholm 3 years ago +1

    Just wrote a test on matrices today yet you still spark my interest!!!

  • @johannbauer2863
    @johannbauer2863 1 year ago +1

    small mistake in 0:44, the last matrix isn't real
    EDIT: ah, it's already in the description

  • @Reliquancy
    @Reliquancy 3 years ago +3

    I thought this was going to be about looking at the eigenvalues and eigenvectors you get from the adjacency matrix of a graph. They call that the spectrum of the graph too I think.

  • @void2509
    @void2509 3 years ago +1

    This is a really amazing maths video! How can this only have 1k views!

  • @saptarshisahoo5075
    @saptarshisahoo5075 3 years ago +1

    at 5:50 should not the eigenvectors be orthogonal? The picture doesn't quite capture their orthogonality.

  • @passeur5526
    @passeur5526 10 months ago +1

    I’m not a math person and I never have been, I don’t understand any of the stuff in this video but I would love to. When I hear about quantum theory and all these important mathematical concepts I don’t understand them because of a lack of knowledge as well as a lack of comprehension in regards to the limited knowledge I already have. Where can I find out about the real world implications these things have? What do these things mean in regards to life?

  • @leyawonder2306
    @leyawonder2306 3 years ago

    You guys are awesome, this video clears things up well

  • @redaabakhti768
    @redaabakhti768 3 years ago +1

    Thank you for the review

  • @robertschlesinger1342
    @robertschlesinger1342 3 years ago +1

    Excellent overview summary. Very interesting, informative and worthwhile video. I encourage you to make more videos.

  • @mastershooter64
    @mastershooter64 3 years ago +8

    0:42 ah yes sqrt(-2) definitely a real number

  • @arisoda
    @arisoda 1 year ago

    5:32 but in that sentence you should also say which vector the projections are taken FROM.

  • @supergeniodelmale2756
    @supergeniodelmale2756 2 years ago

    This was incredible! Need more!

  • @Speed001
    @Speed001 3 years ago

    5:35 Exactly, lol.
    While I've probably learned this before, I think I learned different terms for everything making understanding much harder.

  • @simonl1938
    @simonl1938 5 months ago

    felt like a bunch of yapping at first but it all made sense with the final part

  • @modolief
    @modolief 3 years ago +1

    Loved it! Great video, thanks!!

  • @MichaelRothwell1
    @MichaelRothwell1 3 years ago +1

    As I have just tutored a student through elementary linear algebra, this exposition was spot on for me - starting the journey towards C*-algebras. Nicely paced, nicely explained, well done!

  • @Shaan_Suri
    @Shaan_Suri 7 months ago

    What do you mean by the adjoint being the "conjugate" transposition? I suppose conjugate doesn't mean the same as complex conjugate? Could someone please clarify?

  • @hannahnelson4569
    @hannahnelson4569 1 year ago

    So the spectral theorem is that all eigenvectors of a normal matrix are orthogonal?

  • @aziz0x00
    @aziz0x00 2 years ago

    The intro and the outro are hilariouussssss

  • @AlainNaigeon
    @AlainNaigeon 3 years ago

    Why music in the background ???

  • @syllabusgames2681
    @syllabusgames2681 3 years ago +12

    A “Things To Know” page, that’s what every other video has been missing. I don’t know what a conjugate transposition is though, and I don’t think that was really on the list.
    You lost me. I have stumbled through a few of these videos, but this one I just didn’t get. If you had stretched this into twenty minutes, I might have been able to catch on, but thankfully you didn’t.
    The video was concise and well made, and aside from getting a little too far from the mic at one point, I don’t really know how you could improve it other than making it part of a series to fill out the prerequisites.

    • @jacquelinedoan2224
      @jacquelinedoan2224  3 years ago +3

      Thank you for your feedback! The sound quality is definitely something we have to work on, as this time we were still figuring out how to use the mic 😅 You brought up a good point about compromising the video's self-containedness for conciseness. Alex and I are still figuring this out, so we really appreciate your comment!

    • @carterwoodson8818
      @carterwoodson8818 3 years ago

      @@jacquelinedoan2224 so does the conjugation mean the complex conjugate then??

    • @steviebudden3397
      @steviebudden3397 3 years ago

      @@carterwoodson8818 Yup. Reflect the matrix in the leading diagonal and then take complex conjugates of the entries.
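
      A tiny numpy illustration of this (the matrix is made up): the conjugate transpose, i.e. the adjoint, is transposition plus entrywise complex conjugation, and it satisfies the defining inner-product identity.

      ```python
      import numpy as np

      A = np.array([[1 + 2j, 3.0],
                    [4j, 5 - 1j]])   # a made-up complex 2x2 matrix
      A_adj = A.conj().T             # reflect across the diagonal, conjugate each entry
      print(A_adj)

      # Defining property of the adjoint: <Ax, y> == <x, A_adj y>.
      # (np.vdot conjugates its first argument, the usual complex inner product.)
      x, y = np.array([1.0, 1j]), np.array([2j, -1.0])
      print(np.allclose(np.vdot(A @ x, y), np.vdot(x, A_adj @ y)))  # True
      ```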

  • @miguelriesco466
    @miguelriesco466 1 year ago

    I think that the video was really cool. However, it should be noted that many things in this video only hold when you’re working with finite dimensional vector spaces. But not all is lost: there is actually a generalization of this theorem in infinite dimensions. It is the spectral theorem for compact normal operators. Compact operators behave very similarly to operators in finite dimensional spaces. But a few more courses are needed to understand the topic in depth, notably courses in topology, complex analysis and functional analysis.

  • @DavidBrown-nd7lz
    @DavidBrown-nd7lz 2 years ago

    Do you guys watch Sisyphus 55? Your art style reminds me of his.

  • @quantum5867
    @quantum5867 3 years ago

    Will you tell me the font you used to write "spectral theorem"?

  • @Nathouuuutheone
    @Nathouuuutheone 3 years ago +1

    Things to know:
    5 things I do not know and have never heard about
    Nice

    • @Nathouuuutheone
      @Nathouuuutheone 3 years ago

      While I'm here... Why the hell do I know NOTHING about matrices and they get mentioned EVERYWHERE? Is it that abnormal to not know of them? Is it supposed to be a highschool notion? Cause it certainly wasn't in my highschool.

  • @byronvega8298
    @byronvega8298 3 years ago

    Does anyone know why adjoint operators are defined that way?
    About the applications, this theory is important to explore the realm of partial differential equations. The book by Olver on the topic is pretty good.

  • @tariklahcen9928
    @tariklahcen9928 1 year ago

    Many thanks for this video.

  • @BenjaminLiraLuttges
    @BenjaminLiraLuttges 1 year ago

    What is the name of that font?

  • @jojodi
    @jojodi 3 years ago

    Great videos! What music is this? :)

  • @astroceleste292
    @astroceleste292 3 years ago

    can you put subtitles? the automatic captions are crap for mathematical terms.

  • @nUrnxvmhTEuU
    @nUrnxvmhTEuU 3 years ago +2

    A 7min-long video that aims to explain the (finite-dimensional) spectral theorem, and yet it assumes the viewer knows how to diagonalize a matrix? That seems like a really odd choice, considering the spectral theorem literally says that "normal matrices are orthogonally diagonalizable". I would assume most people either struggle with diagonalization, or already know what the spectral theorem is. Who is the intended audience then?

    • @marcuslaurel5758
      @marcuslaurel5758 3 years ago +1

      I’d imagine there are many introductory linear algebra courses which talk about how to diagonalize a matrix, but fail to go into further depth beyond that. While an introductory linear algebra course can be more theory based than any previous math course, at the end of the day it can still be mostly a course in computing things involving matrices, leaving out much of the deeper theory and the more mathematically technical theorems.

    • @kazachekalex
      @kazachekalex 3 years ago +1

      Our goal was to introduce a geometric interpretation of what is happening when you diagonalize a finite-dimensional operator. Lots of people learn theorems about diagonalization as a rote process of manipulating boxes of numbers, with no intuitive understanding of the process

    • @morristgh
      @morristgh 3 years ago +1

      @@kazachekalex I think you did a great job. I was taught all of these concepts in theoretical chemistry but the video really helped deepen my understanding of it!

  • @annaclarafenyo8185
    @annaclarafenyo8185 3 years ago +6

    The name 'spectral theorem' is reserved for the infinite dimensional case. Diagonalizing an nxn symmetric or Hermitian matrix is trivial.

    • @thatkindcoder7510
      @thatkindcoder7510 3 years ago +2

      *Whips out dictionary*
      Time to understand

    • @annaclarafenyo8185
      @annaclarafenyo8185 3 years ago +4

      ​@@thatkindcoder7510 It's not that hard to understand. Finite dimensional matrices which are symmetric can be diagonalized step by step, by first applying them to a vector again and again (this quickly produces the largest eigenvector), then considering the matrix restricted to the perpendicular space to the vector. This is an inductive procedure, and it diagonalizes any symmetric matrix step by step, by induction on the dimension. The complex analog of symmetric is Hermitian (symmetric after complex conjugation), and the same thing applies there.
      But the 'spectral theorem' is a result about function spaces, about INFINITE dimensional 'matrices'. These are the ones that show up in quantum mechanics, or in applications to PDEs or whatever. An infinite dimensional matrix is a linear operation on a function space.
      For example, consider the differentiable functions on the interval [0,1] with periodic boundary conditions. That just means periodic functions f(x+1)=f(x). This is a vector space, because you can add two such functions pointwise and they are in the same class. Any linear operation on these functions is a 'matrix' of sorts, but now with infinite dimensions.
      So consider the operator 'differentiation'. Its eigenvectors are those functions whose derivative is proportional to themselves, i.e. exp(kx) for some k. To be periodic, k has to be purely imaginary, an integer multiple of 2\pi i.
      The spectral theorem then tells you that this is a basis for the space, i.e. that any differentiable function can be written as a combination of these basis functions.
      Now consider the same operator acting on the vector space of functions on [0,1] with zero boundary conditions. The operator is still formally anti-symmetric (on those functions where it is defined and takes functions in the space to other functions in the space), so i times the operator is 'hermitian', but now it has no eigenvectors in the vector space, because the eigenfunctions would still have to be exponentials exp(kx), and these don't obey the boundary conditions.
      So some infinite dimensional operators admit a basis of eigenvectors, some don't. The ones that do are a subset of the really self-adjoint ones, including the 'compact operators', which admit a limiting procedure to find the eigenvectors step by step, like in the finite dimensional case. These infinite dimension irritations, the distinction between 'symmetric' or 'hermitian' and 'self adjoint', the limit difficulties that sometimes the eigenvectors are distributional and don't even lie in the space you are analyzing, these are what make the spectral theorem interesting.
      The video doesn't discuss these issues, just diagonalizing finite dimensional matrices, which is easy. Physicists tend to diagonalize finite dimensional approximations, and then check the limit is sensible (which it sometimes is, and sometimes it isn't).
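
      A rough numpy sketch of that inductive procedure in the finite-dimensional case (the matrix and the helper names top_eigenvector and diagonalize_symmetric are my own illustration, not from this thread or the video): power iteration finds the dominant eigenvector, deflation removes that direction, and the step repeats.

      ```python
      import numpy as np

      def top_eigenvector(S, iters=500):
          # Power iteration: applying S again and again to a random vector
          # converges to the eigenvector of the largest-magnitude eigenvalue.
          v = np.random.default_rng(0).normal(size=S.shape[0])
          for _ in range(iters):
              v = S @ v
              v /= np.linalg.norm(v)
          return v

      def diagonalize_symmetric(S):
          # Inductive step: find the top eigenvector, then deflate (subtract
          # its eigendirection) and repeat on the remaining subspace.
          vecs, A = [], S.copy()
          for _ in range(S.shape[0]):
              v = top_eigenvector(A)
              lam = v @ S @ v                  # Rayleigh quotient = eigenvalue of S
              vecs.append(v)
              A = A - lam * np.outer(v, v)     # remove that eigendirection
          return np.column_stack(vecs)

      S = np.array([[4.0, 1.0, 0.0],
                    [1.0, 3.0, 1.0],
                    [0.0, 1.0, 2.0]])          # made-up positive-definite symmetric matrix
      Q = diagonalize_symmetric(S)
      print(np.round(Q.T @ S @ Q, 6))          # approximately diagonal (the eigenvalues)
      ```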

    • @marcuslaurel5758
      @marcuslaurel5758 3 years ago +6

      @@annaclarafenyo8185 “it’s not that hard to understand”
      *proceeds to write a short novel on the subject*
      All math is trivial if you study it for long enough. No one wants to see you reinforce your own comfortability/expertise in a subject via an esoteric wall of text written under the false pretense that what you’re writing is easy to understand for someone who perhaps is hearing of this stuff for the first time. The plethora of mathematical machinery you’ve assumed as prior knowledge in your reply is covered in years worth of college level math courses. What shouldn’t be hard to understand is that new math can be difficult and shouldn’t automatically be treated as easy by someone for which said math is not so new.

    • @annaclarafenyo8185
      @annaclarafenyo8185 3 years ago +3

      @@marcuslaurel5758 I haven't assumed anything, and, yes, what I wrote is not difficult, nor is it a 'short novel', it's the simplest example of infinite dimensional function spaces and the issues with diagonalizing infinite dimensional matrices.

    • @imsleepy620
      @imsleepy620 3 years ago +2

      @@marcuslaurel5758 true

  • @aflah7572
    @aflah7572 3 years ago +1

    Great Stuff!

  • @pureatheistic
    @pureatheistic 6 months ago

    The explanation was top notch, but the presentation could use some touching up, such as the pacing and cadence of your speech. It sounded like a high-school presentation by students who were strictly reading from a script they borrowed heavily from Wikipedia.
    The second way you could elevate the presentation is a better mic. But you don't gotta drop a ton of money to make a budget mic that gets rid of echo and room noise. Some eggshell bed foam and panty hose can be rigged around the mic and your voices will come through much clearer. Just make sure to speak facing directly into the mic.

  • @philkaw
    @philkaw 3 years ago +7

    This is poggers maths content!

  • @awsmith1007
    @awsmith1007 6 months ago

    Wonderful video

  • @Iceman3524
    @Iceman3524 3 years ago +1

    This is solid

  • @W1ngSMC
    @W1ngSMC 3 years ago

    4:01 Shouldn't that A matrix be A*T or A† (dagger) instead of just A*?

    • @kazachekalex
      @kazachekalex 3 years ago +1

      It's a difference of notation between mathematical physics (where * denotes conjugates and dagger is the Hermitian adjoint) and pure mathematics (where * denotes Hermitian adjoints and \overline is for the conjugate)

  • @noamzilo6730
    @noamzilo6730 2 years ago +1

    I wish all of this existed without the music. It really makes me want to not listen to the end, though it is a really good resource otherwise

  • @nk2904
    @nk2904 3 years ago

    Subbed with that first bit… the last two questions will be left as an exercise… 🤣🤣

  • @abhijeetsarker5285
    @abhijeetsarker5285 3 years ago

    good job....very awesome video!

  • @nanke1987
    @nanke1987 3 years ago

    here for the comedy

  • @freddyfozzyfilms2688
    @freddyfozzyfilms2688 2 years ago

    yoneda lemma ?

  • @genericperson8238
    @genericperson8238 2 years ago

    Really good video, but please pause a bit between statements. Things move way too fast and it is a bit annoying to always scroll back to pause manually.

  • @codehere142
    @codehere142 2 years ago

    open-source the code from the video bro

  • @ralph3295
    @ralph3295 7 months ago

    good vid ! but I think the animations could have been played a lot slower to give time to think (or pause and ponder as 3B1B says). leave more silent gaps and speak slower. thank youu

  • @paradoxicallyexcellent5138
    @paradoxicallyexcellent5138 3 years ago

    The cuts were too jerky. Gotta leave equations and images on screen an extra half-second to second.

  • @davidk7212
    @davidk7212 3 years ago

    Nice, thank you

  • @usernameisamyth
    @usernameisamyth 3 years ago

    Good stuff

  • @lexinwonderland5741
    @lexinwonderland5741 1 year ago +1

    Well, it's a few years late, but I just stumbled on this video after watching th-cam.com/video/Mx75Kiqyaik/w-d-xo.html series, and this explains it SO incredibly well!!! great job!! shame this didn't make bigger waves in SoME1 or 2, y'all deserve more credit!!

  • @SweatyMathMajor
    @SweatyMathMajor 1 year ago

    I'm just like Jackie in linear algebra; stiff hair and huge glasses.

  • @agam5429
    @agam5429 2 years ago

    W O W

  • @emmepombar3328
    @emmepombar3328 3 years ago +8

    You lost me. Although I know all the things on the "Things To Know" page, you lost me even in the initial explanation of these points. Way too fast and the images didn't help at all. It's definitely not "for dummies".

    • @polyhistorphilomath
      @polyhistorphilomath 3 years ago +1

      Summary: [some] matrices (like derivatives) can be thought of as defining or being defined by the effect applying such objects will have on the vector or scalar operated on.
      So figure out the zeros of det(A-λI) for your matrix. These values are representative of the whole matrix A.
      Then you can conjugate and perform whatever operation you want.
      g h g^-1 or g^-1 h g lets you use the values on the diagonal (once you have diagonalized your matrix) as scalars. And you still have a valid result.
      Also the inner product generalizes the dot product. That’s with hardly any restrictions on what the operands are.
      If you have two abelian operations on a set of vectors you can show it (with some additional constraints, maybe?) is closed. Meaning you don’t need to worry about performing only finitely many operations on the elements. Closure.
      But even in this context operators like T have a 1-to-1 correspondence with matrices. So you can decompose T, an operator in a Hilbert space, the same way you can decompose a matrix.

    • @polyhistorphilomath
      @polyhistorphilomath 3 years ago +1

      This means you can apply the matrix exponential to T and apply that functor to a vector in the space. This is maybe two steps away from using a Fourier or Laplace transform on T. Heck, you could apply a translation operator, assuming that your differential operator D is well-defined. The world is your oyster.
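
      A minimal numpy/scipy sketch of that last idea (the matrix is made up; scipy's expm is only used as a cross-check): once a symmetric matrix is diagonalized, applying the matrix exponential amounts to applying exp to the eigenvalues as ordinary scalars.

      ```python
      import numpy as np
      from scipy.linalg import expm           # reference implementation, for comparison

      A = np.array([[2.0, 1.0],
                    [1.0, 2.0]])              # made-up symmetric matrix
      vals, Q = np.linalg.eigh(A)             # A = Q @ diag(vals) @ Q.T
      expA = Q @ np.diag(np.exp(vals)) @ Q.T  # exp applied to the diagonal entries only
      print(np.allclose(expA, expm(A)))       # True: matches the direct matrix exponential
      ```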

  • @shubhamchoudhari1489
    @shubhamchoudhari1489 3 years ago

    Make more such videos

  • @mimithehotdog7836
    @mimithehotdog7836 3 years ago

    very cool

  • @Landa88
    @Landa88 5 months ago

    For dummies yeah right 💀💀

  • @polyhistorphilomath
    @polyhistorphilomath 3 years ago

    I took exception to your statement that orange plus blue is me. It’s not. I am not the secondary or tertiary color you think I am. My life is a lie.

  • @DC430
    @DC430 3 years ago +2

    The alternating voiceover was detrimental to the video. Just stick to a single narrator

  • @chandrashekarramachandran9769
    @chandrashekarramachandran9769 2 years ago

    wtf

  • @LaureanoLuna
    @LaureanoLuna 3 years ago

    3blue1brown as sloppy as usual, always assuming a rushed graphic can make for a real explanation.

  • @khudadatbaluch7884
    @khudadatbaluch7884 3 years ago +3

    it is good, but I could not understand it. Get better, master it, get the inner story, point it out, and make me understand; I am a layman

    • @jacquelinedoan2224
      @jacquelinedoan2224  3 years ago +3

      Thank you for your comment. Our intended target audience is students with some background in linear algebra, so we understand that it might be difficult to follow as a layman. We will improve on self-containedness in later work!

    • @khudadatbaluch7884
      @khudadatbaluch7884 3 years ago +1

      @@jacquelinedoan2224 it is not nicely done, I teach math, but you have to be better than this

    • @emmepombar3328
      @emmepombar3328 3 years ago +1

      @@jacquelinedoan2224 I have a master's degree in computer science and had a lot of math but still couldn't follow the video, because it was too fast and the pictures were poor.