The Lp Norm for Vectors and Functions

  • Published on 24 Sep 2024

Comments • 95

  • @colinpitrat8639
    @colinpitrat8639 2 years ago +82

    One point missing (or not stressed enough) is that the higher p is, the more important the contribution of large errors (e.g. points far from the values being evaluated). Conversely, the lower p is, the higher the contribution of the small errors.
    So a large p will favour estimates with small maximal errors, whereas a small p will favour estimates that stay close to the function overall while allowing large spikes in places.
    This is particularly visible when comparing L_1 and L_inf. L_inf will be high for a perfect match except at a single point far from the function to estimate, whereas L_1 will be 0. On the other hand, L_inf will be small (= epsilon) for an estimate e(x) = f(x) + epsilon, whereas L_1 would be epsilon*(b-a), where [a,b] is the integration domain (so a large error). (See the sketch after this thread.)

    • @DrWillWood
      @DrWillWood  2 years ago +9

      Absolutely, spot on. thanks for taking the time to write this!

    • @hudasedaki5529
      @hudasedaki5529 7 months ago +1

      Thank you. That's exactly what I was looking for.
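
    A minimal numerical sketch of the weighting effect described in this thread (assuming NumPy; the two error vectors are made up for illustration):

        import numpy as np

        def lp_norm(e, p):
            """Lp norm of an error vector: (sum |e_i|^p)^(1/p)."""
            return np.sum(np.abs(e) ** p) ** (1.0 / p)

        # Two hypothetical error vectors with the same total absolute error:
        spiky   = np.array([0.0, 0.0, 0.0, 4.0])   # one large spike
        uniform = np.array([1.0, 1.0, 1.0, 1.0])   # many small errors

        for p in [1, 2, 4, 16]:
            print(p, lp_norm(spiky, p), lp_norm(uniform, p))

        # As p grows, the spiky vector's norm stays at 4 (dominated by its largest
        # entry), while the uniform vector's norm shrinks toward its maximum of 1:
        # large p penalises a single big error, small p weights the overall error.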

  • @shafihaidery848
    @shafihaidery848 3 years ago +2

    A year of confusion in my head solved in 7 minutes. God bless you, well done.

    • @DrWillWood
      @DrWillWood  3 years ago +1

      That's great, glad it was useful :-) Thank you so much!

  • @joaofrancisco8864
    @joaofrancisco8864 2 years ago +11

    One of the clearest explanations I've seen for this topic. Thank you!

  • @madcapprof
    @madcapprof 2 years ago +20

    The Lp norm, as defined in the video, is valid only if p >= 1 (not merely p > 0).
    The triangle inequality is not satisfied for 0 < p < 1 (see the worked check after this thread).

    • @enterrr
      @enterrr 2 years ago +1

      So for p < 1 it's not actually a norm?

    • @angeldude101
      @angeldude101 2 years ago

      p < 1 can still yield interesting results such as when plotting the unit "circle," though it technically doesn't give a "norm." Of course once you've started abandoning axioms like that, there's nothing stopping you from using p < 0 either. (p = 0 actually does give a valid _distance,_ but not a valid _norm,_ and only if 0^0 = 0, in which case it's the Hamming Distance.)
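
    A quick worked check of the 0 < p < 1 failure mentioned above, for p = 1/2 in R^2,
    where ||x||_{1/2} = (|x_1|^(1/2) + |x_2|^(1/2))^2: take x = (1, 0) and y = (0, 1). Then
    ||x||_{1/2} = ||y||_{1/2} = 1, but ||x + y||_{1/2} = (1 + 1)^2 = 4 > 2 = ||x||_{1/2} + ||y||_{1/2},
    so the triangle inequality fails and ||.||_{1/2} is not a norm.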

  • @bdennyw1
    @bdennyw1 2 years ago +7

    This has been invaluable! I’m understanding the Norms in machine learning now.

    • @DrWillWood
      @DrWillWood  2 years ago +2

      That's awesome!

  • @steveneiselen7993
    @steveneiselen7993 2 years ago +4

    *Outstanding Video!* Informative with both mathematical and behavioral explanations alongside graphical examples, while being succinct and 'to-the-point' without overdoing or underdoing the level of detail.

  • @mmko7374
    @mmko7374 3 years ago +5

    heading towards my first exam block of mechanical engineering and this is really making things clearer. If I wasn't living off a scholarship I would ask if I could donate somewhere, I guess a subscription and likes will have to do for now

    • @DrWillWood
      @DrWillWood  3 years ago

      Thanks so much. It's great to know this has been helpful! All the best for your exams! :-D

  • @carlosunsolayag2728
    @carlosunsolayag2728 3 years ago +4

    I was struggling to figure this concept out for a while. Thanks a lot!!

    • @DrWillWood
      @DrWillWood  3 years ago +1

      Thank you so much! Glad this helped :-)

  • @emanualmarques5757
    @emanualmarques5757 3 years ago +2

    Very nice and clear with visualizations. Thank you.

  • @carlosjesuscaro8274
    @carlosjesuscaro8274 3 years ago +5

    Excellent explanation, very clear. Thank you!

  • @themandlaziman
    @themandlaziman 3 years ago +13

    thanks for the visualisation here. my lecturers just throw theory at me oof

  • @roshanshanmughan7218
    @roshanshanmughan7218 1 year ago +1

    Amazing! Clearly explained! Thank you for the wonderful content!

  • @yashrocks31
    @yashrocks31 2 years ago +2

    Very well explained. Thank you so much. God bless you!

  • @amriteshsinha437
    @amriteshsinha437 2 years ago +1

    Using Lp norms in machine learning now. I only ever understood it vaguely. This is the only time I understand this perfectly. Thanks for the video.

  • @mertyazan
    @mertyazan 1 year ago

    Very clear explanation, thank you! I only wish I had seen your video before; it could have saved me a lot of time!

  • @yidaweng7647
    @yidaweng7647 3 years ago +2

    Thanks for your explanations, very helpful!

    • @DrWillWood
      @DrWillWood  3 years ago

      Thanks, glad it was helpful!

  • @vatandoust
    @vatandoust 3 years ago +2

    Thank you! Simple and effective.

    • @DrWillWood
      @DrWillWood  3 years ago

      Thank you for your kind words!

  • @NguyenTamDanK18HCM
    @NguyenTamDanK18HCM 1 year ago

    How are you so underrated? Your videos are clear, with thorough explanations and nice visuals!! Probably the maths on your channel is too niche for the general public haha

  • @fathimahida8878
    @fathimahida8878 2 years ago

    It was really easy to understand with the visualisation ... Thank you so much

  • @menkiguo7805
    @menkiguo7805 1 year ago

    You explained this so well for me

  • @liamhoward2208
    @liamhoward2208 2 years ago

    Very clear explanation. You have one more subscriber! I also really enjoyed learning what a squircle is!

  • @Via.Dolorosa
    @Via.Dolorosa 1 year ago

    great one, so super easy to understand, I would like to see a video about norms of a transfer function G(s)

  • @basantmounir
    @basantmounir 3 years ago +1

    Fascinating! 🤩

  • @glichjthebicycle384
    @glichjthebicycle384 1 year ago

    Very very helpful. Feeling a bit more prepped now for my analysis 2 exam c:

  • @r.f2173
    @r.f2173 1 year ago +1

    thanks this is great

  • @bizarrelance3698
    @bizarrelance3698 7 months ago

    Wow, such a great explanation.

  • @Sckratchify
    @Sckratchify 3 years ago +1

    Great video, it really helped me understand the concepts better. New subscriber :)

    • @DrWillWood
      @DrWillWood  3 years ago

      Amazing, thank you! glad it was helpful :-)

  • @leticiaaires9123
    @leticiaaires9123 11 months ago

    amazing video.
    thank you so much.

  • @isxp
    @isxp 3 years ago +3

    My text refers to these concepts as Matrix Norms. In the text, they give a 5th qualification, ||AB|| <= ||A|| ||B||

    • @DrWillWood
      @DrWillWood  3 years ago +2

      I think that matrix norms are a special case of the more general vector norms. In my opinion that inequality would be something that is true for matrices but not a requirement (i.e. something could be a norm without that being true but just so happens that it is true for all the norms we care about :-) ). I of course could be wrong with this though. The reason this inequality is more specific to matrix norms and not normed linear spaces in general is because the ability to multiply two vectors together is not always given for a normed linear space (you can add them together and/or multiply by constants). I hope that makes sense!
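
    A small numerical sanity check of that extra submultiplicativity property, assuming NumPy and using the Frobenius norm (the L2 norm applied to the matrix entries, and the default of np.linalg.norm for matrices):

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((3, 3))
        B = rng.standard_normal((3, 3))

        lhs = np.linalg.norm(A @ B)                    # ||AB||
        rhs = np.linalg.norm(A) * np.linalg.norm(B)    # ||A|| ||B||
        print(lhs <= rhs)  # True: the Frobenius norm is submultiplicative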

  • @giostechnologygiovannyv.ri489
    @giostechnologygiovannyv.ri489 2 months ago

    1:36 p = 0 is called a pseudo-norm ;)) and measures the sparsity of the vector :D With that your video is complete :D

  • @austinbristow5716
    @austinbristow5716 2 years ago

    How do you know which Lp norm you would want to use when comparing 2 functions? Would the desired Lp norm change for functions of 3 variables? How does the L3 norm differ in 3D vs 2D? Thank you!

  • @theodoremercutio1600
    @theodoremercutio1600 9 months ago

    A lovely video! I can't help but ask, where is your accent from?

  • @ishakakyuz8586
    @ishakakyuz8586 6 months ago

    Thank you!

  • @zoheirtir
    @zoheirtir 1 year ago

    Hi Dr Will, at 8:33 L2 is defined as a circle, not a square

  • @sharonarandia3630
    @sharonarandia3630 2 years ago

    Thank you so much

  • @SIMITechIndia
    @SIMITechIndia 3 months ago

    Won't the L2 norm be a circle instead of a square at each point? I.e. similar to making a 3D shape by rotating each point of g(x) about f(x) as center and then taking the volume of that shape. Of course we are then applying a square root after this.

  • @goddylincoln8590
    @goddylincoln8590 3 years ago

    You're the best, thank you very much.
    Great work man.
    Just subscribed

    • @DrWillWood
      @DrWillWood  3 years ago

      Awesome, thanks so much! :-)

  • @tomkerruish2982
    @tomkerruish2982 2 years ago +1

    Does anyone ever use L_p norms for anything other than p=1, 2, or infinity? For example, the L_3 norm is well-defined, but does it ever arise naturally?

    • @colinpitrat8639
      @colinpitrat8639 2 years ago

      The L_0 norm can sometimes be used in machine learning (in some particular places) to favour models that have many 0 components. It's simply the number of non-zero components of your vector.

    • @enterrr
      @enterrr 2 years ago

      Norms for p

  • @dansheppard2965
    @dansheppard2965 2 years ago

    Why is the L2 norm so ubiquitous in applied mathematics, even when there's no direct spatial aspect to the setup, for example in statistics and coding theory? I understand it's nicer to apply than L1 (especially near zero, which is an important point) but is there a good reason other than ease of use?

  • @dsagman
    @dsagman 1 year ago

    excellent!

  • @canyadigit6274
    @canyadigit6274 3 years ago

    Another question. Why are functions included in Lp space? A member of the Lp space is a set of numbers (x,y,z,…) that satisfies some axioms; however, functions aren't a single set of numbers. They are multiple sets of numbers, since each point of a function has a set of numbers (x,y,z,…) describing it.

    • @DrWillWood
      @DrWillWood  3 years ago +1

      Hi! Great question! I like to think of the Lp norm for functions as a kind of "limiting case" of vectors whose length tends to infinity. This also relates to your earlier question, actually. So imagine you have a set of n data points of a function over an interval, e.g. (1,1), (2,2), (3,3), and we increase the number of points: (1,1), (1.1,1.1), (1.2,1.2), ..., (3,3), not increasing the interval width beyond 1-3 in the x coordinate but the number of points between 1 and 3. Essentially, the limit as we reach an infinite number of data points is what a function is (at least that's how I like to think of it), at which point the sum becomes an integral. Well, kind of; it works and that's the most important thing I guess! Now, regarding functions and Lp spaces, I think the crucial thing is the function being bounded on an interval, otherwise lots of functions would have a norm of infinity, which isn't useful. It might be a useful exercise to consider a continuous function over an interval with a given Lp norm (integral version) and ask if it satisfies the criteria of a norm (the 4 criteria at 0:58). Also, to give context, I would say Fourier series are the most widely used application of norms involving functions over a given interval. Theoretically, a Fourier series finds the trig series (q, let's say) which minimises the L2 norm of the function you want to approximate (f, let's say) minus q, which is the minimum of sqrt(integral from a to b of |f(x)-q(x)|^2). Hope this was helpful!
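
    A rough numerical sketch of that limiting idea (assuming NumPy; f(x) = x on [1, 3] and p = 2 are arbitrary illustrative choices). Weighting each term by the spacing dx is what turns the discrete sum into the integral:

        import numpy as np

        def lp_norm_of_function(f, a, b, p, n):
            """Approximate (integral_a^b |f(x)|^p dx)^(1/p) using n sample points."""
            x = np.linspace(a, b, n)
            dx = x[1] - x[0]
            return np.sum(np.abs(f(x)) ** p * dx) ** (1.0 / p)

        f = lambda x: x
        for n in [10, 100, 10_000]:
            print(n, lp_norm_of_function(f, a=1.0, b=3.0, p=2, n=n))

        # Approaches sqrt(integral from 1 to 3 of x^2 dx) = sqrt(26/3) ≈ 2.944 as n grows.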

  • @canyadigit6274
    @canyadigit6274 3 years ago

    8:28 why are we integrating? From my definition of the Lp metric we have a sum not an integral
    Nvm figured it out

  • @syedfaizan5841
    @syedfaizan5841 2 years ago

    Thanks a lot

  • @HarshaJK
    @HarshaJK 2 years ago

    Thanks

  • @pengfenglin6851
    @pengfenglin6851 2 years ago

    Super good

  • @SiriusFuenmayor
    @SiriusFuenmayor 3 years ago

    very nice visual representation

  • @syedfaizan5841
    @syedfaizan5841 2 years ago

    Need more and more from u

  • @brunomartel4639
    @brunomartel4639 3 years ago +1

    perfect

  • @osamansr5281
    @osamansr5281 3 years ago +1

    AMAZING THANK U

    • @DrWillWood
      @DrWillWood  3 years ago

      Thank you so much!

  • @guilhemescudero9114
    @guilhemescudero9114 2 years ago

    Thanks!

  • @davidansi1683
    @davidansi1683 3 years ago +1

    Woow...thanks

  • @the_informative_edge
    @the_informative_edge 3 years ago

    Really a great video

  • @manishmiracle4317
    @manishmiracle4317 2 years ago

    Love from 🇮🇳india

  • @pianochannel100
    @pianochannel100 2 years ago

    1:04, why is this the case? ||x-z|| <= ||x-y|| + ||y-z||? What does this mean?

    • @tomkerruish2982
      @tomkerruish2982 2 years ago +1

      Essentially, it means that there are no shortcuts; you can't get from one point to another in less distance by deviating to a third point rather than going directly. It's a holdover from defining a metric space.

    • @pianochannel100
      @pianochannel100 2 years ago

      @@tomkerruish2982 Ah! Thanks!

    • @pianochannel100
      @pianochannel100 2 years ago

      @@tomkerruish2982 So if I'm understanding correctly, it means that a straight line is always the shortest distance between two points in the space?

    • @tomkerruish2982
      @tomkerruish2982 2 years ago +1

      @@pianochannel100 Yes. It's the triangle inequality.

    • @pianochannel100
      @pianochannel100 2 years ago

      @@tomkerruish2982 Right, well thanks Tom!

  • @yodafluffy5035
    @yodafluffy5035 3 years ago

    Thx 🙏😇

  • @manishmiracle4317
    @manishmiracle4317 2 years ago

    What a beautiful mind u have!

  • @T015-f1y
    @T015-f1y 2 years ago

    U saved my ass! Thx

  • @yupbank
    @yupbank 2 years ago

    how about l0 norm ?

    • @DrWillWood
      @DrWillWood  2 years ago

      I think Gilbert Strang discussed this in one of his lectures on matrix methods for data science. He argued that logically this should be the number of non-zero components, e.g. ||(2,5,0)||_0 = 2. Unfortunately, it fails the homogeneity property (scaling the vector doesn't scale the value), so it isn't technically a norm! Similarly, the Lp formula fails the triangle inequality for 0 < p < 1, so those aren't norms either.
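
    A tiny sketch of that counting "norm" and where it falls short (assuming NumPy):

        import numpy as np

        def l0(x):
            """'L0 norm': the number of non-zero components."""
            return np.count_nonzero(x)

        x = np.array([2.0, 5.0, 0.0])
        print(l0(x))      # 2
        print(l0(3 * x))  # still 2: scaling the vector doesn't scale the value,
                          # so homogeneity (||c*x|| = |c| * ||x||) fails and this
                          # is not a true norm.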

  • @elnotacom
    @elnotacom 2 years ago

    Why is the norm called "L"?

    • @DrWillWood
      @DrWillWood  2 years ago

      To be honest I've never even thought of this! I've no idea but I definitely feel like I need to find out now! thanks for the question.

  • @arberithaqi
    @arberithaqi 3 years ago +1

    You sound like the guy from Head Space

    • @DrWillWood
      @DrWillWood  3 years ago

      Love it! I can definitely see that :-)

  • @bugrayalcn6921
    @bugrayalcn6921 2 years ago

    Good old fashioned subscribe button

  • @user-gh6wd5cv1h
    @user-gh6wd5cv1h 1 year ago +1

    this is actually very confusing and misleading, why would |x| have to be 1??

    • @aidandavis1780
      @aidandavis1780 4 months ago

      It doesn't have to be; we are just exploring what shape it makes when |x| = 1. This is a diamond under the L1 norm, a circle under the L2 norm, and a square under the L-infinity norm. Try showing that if we have |x| = R for some R > 0 we get the same shape, just scaled by R.

  • @VaradMahashabde
    @VaradMahashabde 2 years ago

    When I found out all my favorite metrics are actually siblings

  • @eldoprano
    @eldoprano 2 months ago

    Ah fk, squircles really exist!

  • @user-sb6os
    @user-sb6os 2 years ago

    will wood real

  • @derblaue
    @derblaue 2 years ago

    What about semi-p norms for p ∈ Q, 0 < p < 1? They also have a very interesting unit circle (for x ∈ R^n, (x)ᵢ ≥ 0)