Renormalization: The Art of Erasing Infinity

  • Published Jun 10, 2024
  • Renormalization is perhaps one of the most controversial topics in high-energy physics. On the surface, it seems entirely ad-hoc and made up to subtract divergences which appear in particle physics calculations. However, when we dig a little deeper, we see that renormalization is nothing to be afraid of and that it is perfectly mathematically valid!
    0:00 Intro
    1:20 Source of Divergences
    3:30 A Simple Analogy
    8:20 Renormalization in Particle Physics

Comments • 337

  • @praveenb9048
    @praveenb9048 1 year ago +174

    Reminds me of the story that imaginary numbers were first invented as a part of an intermediate step when solving cubic equations with real roots. They cancelled out and didn't appear in the solution but were an indispensable part of the calculation process.

    • @homejonny9326
      @homejonny9326 1 year ago +4

      wow that should be a video about it!!

    • @jkn6644
      @jkn6644 1 year ago

      @@homejonny9326 th-cam.com/video/N-KXStupwsc/w-d-xo.html

    • @samuelallan7452
      @samuelallan7452 1 year ago +9

      @home jonny Veritasium has one about this I highly recommend you check it out

    • @GGysar
      @GGysar 1 year ago +5

      You know you've reached the smart side of TH-cam when the comments on the maths/physics video you're watching discuss another maths video you've watched.

    • @mage1over137
      @mage1over137 1 year ago +3

      It's basically the same thing but with non-abelian structures too. In both cases you're "modding out" a symmetry.
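The Cardano story in the top comment can be checked numerically. A minimal sketch (the cubic x^3 - 15x - 4 = 0 is a standard textbook example chosen here for illustration, not something from the thread): all three of its roots are real, yet Cardano's formula routes through the complex number 2 + 11i, whose imaginary part cancels in the final answer.

```python
import cmath

# Cardano's formula for the depressed cubic x^3 + p*x + q = 0.
# For x^3 - 15*x - 4 = 0 all three roots are real ("casus irreducibilis"),
# yet the formula passes through complex intermediates that cancel.
p, q = -15.0, -4.0
disc = (q / 2) ** 2 + (p / 3) ** 3            # 4 - 125 = -121 < 0
u = (-q / 2 + cmath.sqrt(disc)) ** (1 / 3)    # cube root of 2 + 11i -> 2 + i
v = -p / (3 * u)                              # the conjugate piece, 2 - i
root = u + v                                  # imaginary parts cancel
print(root)                                   # ~ (4+0j): a purely real root
```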

  • @Williamtolduso
    @Williamtolduso 1 year ago +17

    Every qft lecturer should just link to this video instead of trying to explain it in front of a live audience.
    Great job

  • @jeffreyhersh908
    @jeffreyhersh908 1 year ago +25

    Brings me back to when I was working on my doctoral thesis in the late 90s. Applied a technique called operator regularization along with renormalization to generate effective field terms of the Higgs sector from supersymmetric models. Those calculations were a bear and I remember factors of i and 2 being the bane of my existence doing them.

    • @hooked4215
      @hooked4215 8 months ago +3

      All this just to mention that you have a doctorate?

  • @sadface7457
    @sadface7457 2 years ago +156

    Renormalization is one of those underrated developments in physics. When we talk about the success of the standard model, it should be in reference to its taming via renormalization.

    • @MaeLSTRoM1997
      @MaeLSTRoM1997 1 year ago +10

      :sad renormalized noises:

    • @rashidisw
      @rashidisw 1 year ago +2

      I am curious why they haven't tried to scrap the integrals altogether.
      I mean, they already did that with the ultraviolet catastrophe, where they replaced the integral with quanta-sized sums, so why wasn't such an approach tested for QCD/QED?
      Why insist on the integrals from which all the calculation problems stem?

    • @oncedidactic
      @oncedidactic 1 year ago +1

      @@rashidisw so you suggest that instead of wave functions, there be only discrete values allowable? perhaps those "inner wave functions" lead to continuous probability distributions? :D

    • @misterlau5246
      @misterlau5246 1 year ago

      @@oncedidactic input energy (electricity =)) is not discrete, except the stuff it affects. So not all of that continuous domain gives a resulting value 😅🤓

    • @undefinednan7096
      @undefinednan7096 1 year ago +4

      ​@@rashidisw your suggestion is essentially lattice field theory, which is an approach heavily used in QCD. Unfortunately, lattice field theory has its limits, including various problems with computer power and that taking the limit of a continuous space-time is difficult.
      Basically, discrete space time isn't Lorentz invariant (with the possible exception of loop quantum gravity, but I don't really understand how that works), so you can't just use some arbitrary discretization or you end up with various problems. Also, lattice fermions are _hard_ -- there are multiple open areas of research dealing with them, like the numerical sign problem and realizing chiral lattice fermions (not chiral symmetry, that's something else).
      Despite these difficulties, lattice field theory, especially lattice QCD, is an essential part of modern physics and solves certain problems that can't yet be solved any other way.
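A minimal numerical illustration of the Lorentz-invariance point in the reply above (the formula is the standard free-scalar lattice dispersion; the spacing and momenta are arbitrary choices): on a lattice with spacing a, p² becomes (2/a)² sin²(pa/2), which only matches the continuum when pa ≪ 1.

```python
import math

# Continuum vs. lattice dispersion for a free scalar field, spacing a = 1.
# On the lattice p^2 is replaced by (2/a)^2 * sin^2(p*a/2); agreement
# requires p*a << 1, one face of the broken Lorentz invariance.
a = 1.0
for p in (0.1, 0.5, 1.0, 2.0):
    continuum = p ** 2
    lattice = (2.0 / a * math.sin(p * a / 2.0)) ** 2
    print(p, continuum, lattice)   # the two drift apart as p*a grows
```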

  • @doodelay
    @doodelay 1 year ago +42

    In this video the creator is fully convinced this isn't a problem, but many other theoretical physicists, like Feynman, Freeman Dyson, and Dirac, considered it a massive issue.
    So it isn't that we don't understand the counterbalancing act; it's that even having to counterbalance in this way is itself suspect, as there's no natural counterbalance to speak of. It's pure math.

    • @tenbear5
      @tenbear5 11 months ago +5

      Yes! Absolutely! It’s total BS.

    • @mathoph26
      @mathoph26 8 months ago

      Definitely more than suspect; they refuse to build another theory. It is pure obsession, not physics. They do not want to discover the laws of the universe; they want to impose their own laws on it in some way.

    • @naitsirhc2065
      @naitsirhc2065 8 months ago +1

      It's because since then we figured out the meaning of these divergences, and how to make sense of them

    • @mathoph26
      @mathoph26 8 months ago +1

      @@naitsirhc2065 the meaning of a divergence is that your theory is incorrect. When you solve a PDE in physics (theoretical physics consists of solving PDEs), you always discard the divergent solution. Otherwise this is maths not applied to the real world.

  • @luckyluckydog123
    @luckyluckydog123 2 years ago +17

    the best explanation of renormalization on youtube! Thanks

  • @DavidRodriguez-dy1er
    @DavidRodriguez-dy1er 1 year ago +27

    11:04 "If we define these counterfactors so that it exactly cancels these divergences" I don't see a logical or physical justification for doing that.

    • @mlmii1933
      @mlmii1933 1 year ago +3

      It's called future funding.

    • @ayushsharma8804
      @ayushsharma8804 10 months ago +1

      It's because they are arbitrary

  • @piercingspear2922
    @piercingspear2922 1 year ago +11

    After two weeks of going around blindly in my QFT lectures on renormalization, I finally have an idea of why they do that and why it's legal lol. Thank you so much for the video! xD

  • @sambhavgupta4137
    @sambhavgupta4137 18 days ago

    Amazing explanation!! You are one of the best explainers of physics out there. Keep working on these.

  • @alberto_7683
    @alberto_7683 1 year ago +9

    I am working on top quark pole mass measurements and your explanation was great. It reminded me of my old QCD lessons :)

  • @christiannguyen3102
    @christiannguyen3102 1 year ago +5

    Woah, feels like I'm re-reading Schwartz... great overview! It's awesome to put these high-level videos out there.

  • @karimshariff7379
    @karimshariff7379 1 year ago +6

    I am a layman and just heard (in an interview with Dyson) that even with renormalization, the QED series is still divergent (Dyson, Phys. Rev., 1952).
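Dyson's divergence can be illustrated with a toy factorially-growing series (a stand-in chosen here for illustration, not the actual QED expansion): the terms of Σ n! gⁿ shrink until roughly n ~ 1/g and then grow without bound, so the best one can do with such an asymptotic series is truncate near the smallest term.

```python
import math

# Toy asymptotic series: term_n = n! * g^n with g = 0.1.
# Terms decrease until n ~ 1/g, then grow without bound; truncating near
# the smallest term is the best you can do -- the series never converges.
g = 0.1
terms = [math.factorial(n) * g ** n for n in range(41)]
n_best = min(range(41), key=lambda n: terms[n])
print(n_best)        # optimal truncation order, near 1/g = 10
print(terms[40])     # ~ 8e7: the tail terms have blown up again
```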

  • @paulkohl9267
    @paulkohl9267 1 year ago +22

    20:16 But it is shuffling ∞'s under the rug! And it is all because a non-normalizable basis for the particle wave-functions was chosen when QED was developed (free plane-wave sol. to Dirac eq.), and that is just some of the mathematical problems with QFT: Haag's Thm; Equal Time instead of Equal Event Comm. Rel.; Dyson's Normal Ordering v. Time Ordering Hack, "effective" field theory which allows one to effectively massage whatever answer one wants into a Lagrangian; Canonical Quantization treating clockmaker time in a Newtonian, not SR invariant, way; etc. etc.

  • @petit.croissant
    @petit.croissant 2 years ago +25

    im really grateful for this video; you manage to explain such a perplexing topic in physics in a simple and concise manner!!

  • @lalalalaphysics7469
    @lalalalaphysics7469 1 year ago +8

    Let me get this straight: we introduce adjustable parameters to make the equations work, plugging in somewhat arbitrary functions to eliminate any infinities?

  • @user-gv5zh5jv9m
    @user-gv5zh5jv9m 1 year ago

    I never understood this topic in my classes; I just repeated calculations after the lecturer. But I definitely understand something now. Thank you very much.

  • @viascience
    @viascience 2 years ago +15

    Excellent presentation!

  • @kerbydimayuga3856
    @kerbydimayuga3856 1 year ago +17

    LOVE YOUR EXPLANATION! Do you recommend any books on perturbation theory and renormalisation? I'd love to deepen my knowledge of this. I knew just the bare minimum when I passed my QFT exam.

    • @ToriKo_
      @ToriKo_ 1 year ago

      I’d also like to know

    • @jonasdaverio9369
      @jonasdaverio9369 4 months ago

      The classic books cover it, but I don't know in how much detail: Schwartz, Weinberg, Peskin and Schroeder; the one we use at my university is Maggiore. (I don't know each book's title, but if you just type "[author] quantum field theory" you should find it.) I didn't look at them too much, but I think one of them should answer your questions.

  • @mage1over137
    @mage1over137 1 year ago +5

    It's easy to imagine 4 dimensions: imagine an arbitrary number of dimensions and then take the limit as d approaches 4.

  • @nikhilhatwar
    @nikhilhatwar 1 year ago +3

    For someone who is trying to study QFT on his own, this video is gold. Thank you so much! Looking forward to seeing other videos by you.

  • @montagdp
    @montagdp 1 year ago +2

    Question: do the infinities arise only in the perturbation methods for solving QFT calculations? In other words, they are not a feature of the full equations? Or we don't know because we can't solve the full equations?

  • @KippGenerator
    @KippGenerator 1 year ago +1

    Thanks! This really helped me. It has been a while since I looked into this stuff, but thanks to your video I understand it better than ever before.
    Do you also have a video that elaborates on critical exponents?

    • @zapphysics
      @zapphysics 1 year ago +3

      Thank you very much! I do not have anything about critical exponents on my channel. Perhaps in the future I will make one, but since my background isn't in condensed matter, I would want to put in a bit of extra work to make sure that I am not explaining things incorrectly lol.

  • @paulkohl9267
    @paulkohl9267 1 year ago +3

    Caveat emptor, there are many renormalization schemes. Each scheme has its own advantages and problems, but all of them get way more complicated the more loops one has to consider.

  • @naitsirhc2065
    @naitsirhc2065 8 months ago

    Excellent video, this made everything clear to me

  • @thekinghass
    @thekinghass 2 years ago +1

    Thank for your hard work and the always good contents

  • @tanchienhao
    @tanchienhao 1 year ago +2

    Was the quadratic equation analogy drawn from anywhere? It is such a good analogy, if you thought of it yourself that’s amazing!

  • @chillphil967
    @chillphil967 2 years ago

    Excellent description.

  • @frotzecht3461
    @frotzecht3461 1 year ago +3

    This is a beautiful exposé. I would just like to add that there is also a physical reason why renormalization is needed, independent of divergences. Namely, in a perturbation theory there's a mismatch between the measured inputs, which always include all orders of the perturbation series, and the values used in the calculations, which obviously don't. Especially when taking a value measured in process A and using it to calculate process B, it is by no means clear that the perturbation series will work in such a way that this succeeds. The RG equations are a formalization of the requirement that it does.
    One can construct classical theories with this mismatch, e.g. taking Newtonian gravity and adding a perturbation to the potential: what is the mass that enters the calculations, the new gravitational mass or the inertial mass? In the Einstein case both will have to be the same, but then the perturbation series has to work in such a way that it maintains this equality up to the order of perturbation considered.

    • @hooked4215
      @hooked4215 8 months ago

      "In the Einstein case" both masses are the same because, for this guy, mass was a potential.

    • @frotzecht3461
      @frotzecht3461 8 months ago

      @@hooked4215 i don't know what "mass was a potential" is supposed to mean, so I may be missing your point, but note that we're talking about perturbative series here, not the full theory. And that is kinda the point: what makes sense or is the case for the full theory may not make sense or not be the case for the perturbative series if one isn't very careful.

    • @hooked4215
      @hooked4215 8 months ago

      ​@@frotzecht3461 I assume that you have studied the short piece by Einstein “On the Origin of Inertia”?

    • @frotzecht3461
      @frotzecht3461 8 months ago

      @@hooked4215 Do you mean "Does the inertia of a body depend on its energy content?" i.e. the E=mc² paper?

    • @hooked4215
      @hooked4215 8 months ago

      Have you?@@frotzecht3461
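The RG equations mentioned at the top of this thread have a simple closed form at one loop. A sketch under illustrative assumptions (the coefficients b and both endpoints are stand-ins; real QED running would need every fermion threshold, and the QCD-like starting value is arbitrary):

```python
import math

# One-loop running: d(alpha)/d(ln mu) = b/(2*pi) * alpha^2 integrates to
#   1/alpha(mu) = 1/alpha(mu0) - b/(2*pi) * ln(mu/mu0).
# b > 0 (QED-like): the coupling grows with energy;
# b < 0 (QCD-like, asymptotic freedom): it shrinks.
def run(alpha0, b, mu0, mu):
    return 1.0 / (1.0 / alpha0 - b / (2.0 * math.pi) * math.log(mu / mu0))

alpha_qed_like = run(1.0 / 137.0, 2.0 / 3.0, 0.000511, 91.0)   # e mass -> Z mass, GeV
alpha_qcd_like = run(0.30, -23.0 / 3.0, 1.0, 91.0)             # b = -(11 - 2*5/3)
print(alpha_qed_like)   # slightly above 1/137: grows with energy
print(alpha_qcd_like)   # below the starting 0.30: shrinks with energy
```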

  • @joefromzohra
    @joefromzohra 1 year ago +2

    I'm wondering if these sums are conditionally convergent? And if they are, can they be explained by the Riemann rearrangement theorem? Just a thought...
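Loop divergences are not conditionally convergent sums, but the theorem the comment asks about is easy to demonstrate (the alternating harmonic series below is the standard textbook example, unrelated to any particular QFT sum): reordering a conditionally convergent series can change its sum.

```python
import math

# Riemann rearrangement in action: 1 - 1/2 + 1/3 - 1/4 + ... converges
# (conditionally) to ln 2, but the rearrangement taking one positive term
# followed by two negative ones, 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...,
# converges to ln(2)/2 instead.
N = 10 ** 6
original = sum((-1) ** (k + 1) / k for k in range(1, N + 1))
rearranged = sum(1 / (2 * j - 1) - 1 / (4 * j - 2) - 1 / (4 * j)
                 for j in range(1, N + 1))
print(original)      # ~ 0.6931 (ln 2)
print(rearranged)    # ~ 0.3466 (ln 2 / 2)
```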

  • @josephh891
    @josephh891 1 year ago

    Well done. There are too many people who have no understanding of particle physics that make videos claiming that renormalization is spurious. The reality is that they have no clue of the process and indeed no understanding of the subtleties in physics in general. The amount of misinformation they spread, and the number of viewers who believe that rubbish is worrying. It's the same reason why we have "flat earth" dingbats. But it's very normal for some people, who will never understand certain concepts, to just throw in the towel and tell themselves "all of this must be wrong because *_I_* don't get it". lmao. Thanks again for the video.

  • @dor00012
    @dor00012 2 years ago +7

    Did you mention somewhere what a loop in a Feynman diagram actually means?
    Also, why would a loop contain some unknown momentum?
    It's pretty hard to follow without knowing these basics; hard to figure out what we're doing here.

    • @LuWeTs
      @LuWeTs 2 years ago +7

      When you write the perturbative series for your scattering process, the 0th order is described by a sum of tree-level Feynman diagrams (diagrams without any loops), which correspond to the classical way particles interact in the description of the quantum theory.
      However, a quantum theory is constructed to capture the quantum behaviour of the system. That behaviour is represented by the higher orders of the perturbative series, and (using the Feynman rules) you find that the quantum effects are represented by the loop diagrams (because they depend directly on h-bar; tree-level diagrams don't).
      The loops contain unknown momenta because conservation of energy and momentum at each vertex doesn't fix them: if you try to solve the conservation equations around a loop, you end up with one unfixed momentum flowing around the loop.

    • @ToriKo_
      @ToriKo_ 1 year ago

      @@LuWeTs how do you even know this stuff

    • @ToriKo_
      @ToriKo_ 1 year ago +1

      @@bendoverman6242 wow, what a waste of everyone's time your comment was

  • @bahaaphysics10
    @bahaaphysics10 2 years ago +2

    Well presented.

  • @void_pepsi6405
    @void_pepsi6405 1 year ago +1

    love you i dont understand any of this but your voice is soothing

  • @BorisNVM
    @BorisNVM 2 years ago +6

    You are awesome. Do you know where I can find more info about this line of thought?

    • @zapphysics
      @zapphysics 2 years ago +8

      @Boris Valderrama Thank you very much! Depending on what you mean by this line of thought, pretty much any QFT book (Peskin and Schroeder, Weinberg, Schwartz, etc.) will discuss renormalization at a technical level. There is also a book by John Collins that is entirely dedicated to renormalization, though it is quite dense if you ask me. As for the connection to singular perturbation theory, I first heard this from one of the professors at my university as a somewhat off-handed comment, and decided to look a bit deeper into the overlap and was shocked to see the similarities. I think he said he first saw it in a book (I think it might have been Goldberg? I'm not totally sure), so I will look into it a bit and let you know if I figure out exactly where he first saw it.
      Just as a warning, these resources tend to require a fair bit of background in physics (I don't know if this warning applies to you, but I figured I would include it in case since books can be quite expensive), but I don't personally really know of any less technical discussions of renormalization.

    • @BorisNVM
      @BorisNVM 2 years ago +4

      @@zapphysics Thank you very much!!! I've studied QFT for almost a year now and I've always considered renormalization quite difficult to understand. So maybe I need to check out other books; I used to read Ashok Das & Peskin. I'm going to take a look at the other books you mention :) Again, thank you very much.

    • @heywrandom8924
      @heywrandom8924 2 years ago +2

      @@zapphysics "P. Kopietz, L. Bartosch, F. Schütz, Introduction to the Functional Renormalization Group" is quite nice and I think not too difficult to read, but I already knew most of it when I read it, so my opinion probably isn't objective. It's nice in particular because it goes into non-perturbative renormalization. For a less technical and more conceptual reading, maybe "An introduction to the nonperturbative renormalization group" by Bertrand Delamotte. Also, renormalization as you described it is more related to renormalization in differential equations to remove secular terms. For example, see the articles "A hint of renormalization" by Bertrand Delamotte (on arXiv), "L. Y. Chen, N. Goldenfeld, Y. Oono, Renormalization group theory for global asymptotic analysis, Phys. Rev. Lett. 73 (1994)" and "L. Y. Chen, N. Goldenfeld, Y. Oono, Renormalization group and singular perturbations: Multiple scales, boundary layers, and reductive perturbation theory, Phys. Rev. E 54 (1996)".

    • @Milendras
      @Milendras 1 year ago

      @@BorisNVM you should check the article Renormalization group and singular perturbations: Multiple scales, boundary layers, and reductive perturbation theory by Chen Goldenfeld and Oono 1996, where they explain in detail the links between the two approaches.

  • @patrickguest2762
    @patrickguest2762 1 year ago +2

    by far the best explanation I've seen. My prof did a shit job in comparison

  • @sharp7450
    @sharp7450 1 year ago +1

    NICE VIDEO!!!!!!! KEEP THEM COMING MY GUY! 😁

  • @MrMas9
    @MrMas9 2 years ago +1

    Fantastic video man! :)

  • @dukeyin1111
    @dukeyin1111 5 months ago

    Wonderful. Now I feel much better

  • @Achrononmaster
    @Achrononmaster 5 months ago

    @5:22 not sure if this is where you are going, but I vaguely recall Carl Bender teaching that it is darn good to get divergent series, they sum much faster numerically using Padé approximants or whathaveyou. You do wacky things like stick the ε in the exponent, and then let it go to zero. Or something like this, my memory is bad. There is a trick involved in getting the asymptotics right. I do not know if it helps with Feynman diagram summation, but it's cool stuff anyway.

  • @newbiex11
    @newbiex11 1 year ago +3

    In software development we call this Workaround

    • @rosomak8244
      @rosomak8244 1 year ago +3

      In maths we call it cheating and wrong.

  • @DMBall
    @DMBall 7 months ago +2

    "Renormalization" should come into wider use. For instance Samuel Bankman-Fried really didn't steal all those billions; he renormalized them.

  • @stemenary8786
    @stemenary8786 1 year ago +4

    This was an excellent video, thanks!
    Like many other experimentalists, I have always accepted that renormalisation is legit, I just don't understand exactly *why* it's legit. The common hand-wavy argument is that you have some physical finite part, and some unphysical infinite part, and you are using this clever process to separate the two.
    Is the answer that PT calculations give infinite answers because they are missing higher orders, which would cancel our finite-order loops if we had included them? While not a proof, my intuition is that the high-energy loops correspond to small distances, where we would also expect to resolve higher-order loops, but we didn't include them. To phrase it another way: in PT we include low-order loops which occur at a small resolution, but do not include the high-order contributions at that same resolution (which cancel for some reason?), leading to an unphysical divergence.
    And then with renormalisation, we say that despite that unphysical infinity, PT can still correctly describe the theory at-and-around a given finite scale. And so we fixed the values of our theory to measurable quantities at that well-defined scale (e.g. Z-mass for EWK), and use the (well-described-for-some-reason-I-haven't-figured-out) _scale dependence_ around that scale to make good calculations, provided that we don't deviate too far.
    Is this intuition vaguely correct?

  • @eulefranz944
    @eulefranz944 1 year ago +1

    Insanely good thumbnail.

  • @shameer339
    @shameer339 1 year ago

    Great explanation

  • @ToriKo_
    @ToriKo_ 1 year ago +1

    Appreciate the video even if I didn’t understand it

  • @sebastiandierks7919
    @sebastiandierks7919 1 year ago +2

    At 12:58, I don't really get the transition from renormalisation to regularisation. If I only integrate the unspecified loop momentum up to a maximum momentum Lambda, the UV divergence goes away; if I then take the limit Lambda -> infinity again, it's back. If renormalisation gets rid of the infinity when integrating to infinity, what is the regularisation for? Or do you just mean that all Feynman diagrams, including counterterm diagrams, are understood to be computed up to a maximal momentum, then added so the Lambdas cancel, and then the limit is taken? Maybe I overthought it.

    • @zapphysics
      @zapphysics 1 year ago +1

      Good question! How it works is that the divergences are always going to appear in intermediate steps, so for example, many different Feynman diagrams in our theory are going to individually be divergent. These have to be regulated.
      Then, assuming we do our renormalization properly, once we sum up all of these contributions, the parts of the regulated integrals that contribute to the divergences cancel out and we are left with something finite.
      So if we choose a UV cutoff regulator as you suggest, we get dependencies on e.g. Lambda^2 or log(Lambda) in individual diagrams. But these drop out once we do our renormalization properly and we are left with something well-behaved as Lambda->infinity.

    • @sebastiandierks7919
      @sebastiandierks7919 1 year ago +1

      @@zapphysics thank you! Your explanations are always very clear and helpful.
      So in practice, do you actually have to calculate any counter term diagrams or do you just calculate the normal diagrams using the dressed coupling constants and masses, whose running values with energy scale you know from the renormalization group equations (and an initial condition)? Aren't the counter diagrams the same mathematical expression as the normal diagrams anyway, just with delta Z_g × g instead of g as a coupling constant? I still don't really understand why the infinities went away; if on the left hand side of an equal sign is the sum of bare diagrams, which add to infinity, and on the right hand side is the sum of dressed diagrams and counter diagrams, which describe the same matrix element in the same theory, shouldn't they still add to infinity?

    • @zapphysics
      @zapphysics 1 year ago +1

      @@sebastiandierks7919 Fantastic questions! I will start with the first one. You are 100% correct that these two methods are completely equivalent, and which one you use is pretty dependent on the calculation you are actually doing (as well as perhaps personal taste). If you are calculating full amplitudes including all of the finite pieces, then it is probably easier to just use the renormalized parameters in lower-order results exactly the way you suggested. This is straightforward because you typically have the full lower-order result anyway, so it really is just a matter of essentially Taylor series expanding in coupling constants. However, if you want to calculate RG quantities like the beta functions or anomalous dimensions, then you really only need the UV-divergent pieces, so it isn't as common to have the finite pieces as well and it actually turns out that extracting just the UV-divergent pieces is FAR easier than calculating the full diagram. So in this case, you probably wouldn't want to go through the extra work of calculating the finite pieces in order to replace the bare quantities with their renormalized counterparts, and it is easier to just calculate the diagrams with counterterm insertions. If you are feeling particularly spicy, you could even calculate both as a check to make sure they give the same result, but often times that is a little overboard. Personally, I tend to just always calculate all of the counterterm diagrams because I find it a bit more systematic and I feel like it leaves a little less room for error (though, it helps that I use a computer to calculate the diagrams, so I'm not doing it by hand).
      For your second question, I think that it is a bit more instructive to actually flip it around: on the right-hand side, we have a sensible, finite result after renormalization (which remember is really just an physically irrelevant re-shuffling of parameters), but on the left-hand side, we have something divergent. So where are the divergences coming from? One way to think about this is that before renormalization, we are trying to just calculate an amplitude using the same "bare" parameters that we would use in the classical theory. However, if we just try to do this brute-force, we neglect the fact that the *parameters themselves* receive quantum corrections that we are not accounting for. So it is actually quite stupid of us to assume that we would get a sensible result using the same parameters we would use classically. Renormalization is essentially just a way of doing the proper book-keeping to make sure that we account for all of these corrections to the classical parameters and include them systematically in other calculations.

    • @sebastiandierks7919
      @sebastiandierks7919 1 year ago

      @@zapphysics I wish you a happy new year and want to thank you very much for your clear and detailed answers, renormalisation and regularisation got much clearer for me now. Probably still have to do more reading in my Peskin Schroeder book, but it's nice to have the right conceptual "hook" in my head to not get lost in the maths.
      I already watched the rest of your awesome Standard Model playlist and did more studying on those topics. As a warning, I will probably ask some more questions on the isospin video :D
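The cutoff discussion in this thread can be made concrete with a toy integral (an illustrative caricature of a log-divergent loop, not any particular QED diagram): each regulated integral grows like ln Λ, but the combination left after subtracting at a reference mass has a finite Λ → ∞ limit, which is renormalization in miniature.

```python
import math

# Closed form of the cutoff integral from 0 to L of k^3 / (k^2 + m^2)^2 dk:
#   (1/2) * [ln((L^2 + m^2)/m^2) + m^2/(L^2 + m^2) - 1]
# It diverges like ln(L) as the cutoff L -> infinity.
def I(L, m):
    u = L * L + m * m
    return 0.5 * (math.log(u / (m * m)) + m * m / u - 1.0)

m1, m2 = 1.0, 3.0
for L in (10.0, 1e3, 1e6):
    # Each integral grows with the cutoff, but the subtracted combination
    # approaches (1/2) * ln(m2^2 / m1^2) = ln(3): the L-dependence cancels.
    print(L, I(L, m1), I(L, m1) - I(L, m2))
```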

  • @humanvoicemail5059
    @humanvoicemail5059 1 year ago

    Thank you for this!

  • @andrewberardi6158
    @andrewberardi6158 2 years ago +5

    I seem to have stepped into the wrong class....

  • @davidhand9721
    @davidhand9721 1 year ago +1

    How does effective field theory relate to renormalization? Is it an alternative to it, or is it a rationalizing reframing of it? In other words, are they essentially the same math?

    • @alicewyan
      @alicewyan 1 year ago +3

      In an effective theory ("non-renormalizable"), you need to set a scale first, and then you can do perturbation theory. But if you need to go above the scale, you'll need to calculate to a higher order in perturbation theory, and then you keep getting new counterterms every time you calculate higher and higher terms. So the math is basically the same, but you need more physical observables to fix the growing number of counterterms.

    • @bhavyasinghal1949
      @bhavyasinghal1949 1 year ago

      @@alicewyan I am trying to understand the Coleman-Weinberg potential, which clearly requires a proper understanding of renormalization. Can you suggest a book I can use to understand the effective potential better? And also the order of topics to go through, in case you are aware of them?

  • @tomaschlouba5868
    @tomaschlouba5868 1 year ago

    Perfect video!

  • @jrwarfare
    @jrwarfare 2 years ago +1

    Best renormalization video.

  • @mathOgenius
    @mathOgenius 2 years ago

    that was enlightening!

  • @MushookieMan
    @MushookieMan 1 year ago +2

    How can you make a divergent series into a convergent one and claim you have changed nothing? What have you thrown out?

    • @simonO712
      @simonO712 1 year ago

      A simplified explanation is that you basically divide the series/integral into a finite part that has useful information and a divergent part that doesn't, and find a way to get rid of the divergent part.

    • @MushookieMan
      @MushookieMan 1 year ago

      @@simonO712 Throwing away infinites is a massive change. There's not some change of variables that makes a divergent series converge, unlike what this video seems to say.

    • @simonO712
      @simonO712 1 year ago +1

      @@MushookieMan It's been a while since I did these calculations myself (back in 2016 I think) so I may be misremembering, but what you do from what I recall is that you add additional (fully allowed) terms to the Lagrangian that diverge in the same way as the "original" terms do. Thus when you regularize the integrals the diverging terms cancel, meaning that when you then take the limit only the finite part remains. It sounds sketchy for sure and faced a lot of opposition initially, but from what I've understood it can be shown to be mathematically sound. It also results in _extremely_ accurate predictions.
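The finite/divergent split described in this thread is what dimensional regularization makes precise: in d = 4 - 2ε dimensions a one-loop bubble comes out proportional to Γ(ε), and the expansion Γ(ε) = 1/ε - γ_E + O(ε) separates the pole (absorbed by a counterterm) from the finite remainder. A numerical sketch of just that expansion:

```python
import math

# Gamma(eps) = 1/eps - gamma_E + O(eps): the 1/eps piece is the "divergent
# part" a counterterm absorbs; the remainder tends to the finite -gamma_E.
gamma_E = 0.5772156649015329        # Euler-Mascheroni constant
for eps in (0.1, 0.01, 0.001):
    finite_part = math.gamma(eps) - 1.0 / eps
    print(eps, finite_part)         # approaches -gamma_E as eps -> 0
```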

  • @PiXeLSn1p3r
    @PiXeLSn1p3r 2 years ago

    i love this video, thank you ❤️

  • @disgruntledwookie369
    @disgruntledwookie369 5 months ago

    Question: Is the non-renormalizability of GR the *only* problem with quantum gravity, or is there more to it than that? I know there is some confusion about the role of time in QM vs GR as well, but I've found it difficult to really get to the root of the issue. Everyone always repeats that GR and QFT "don't play nice" or are "incompatible", but what exactly is the reason?

    • @zapphysics
      @zapphysics 5 months ago

      @disgruntledwookie369 these are all very good questions that hopefully I can somewhat address. I will say, I'm not an expert in quantum gravity, but a lot of the questions can be recast into questions about non-renormalizable theories in general.
      The first thing that I really want to distinguish, and that I probably should have done a better job distinguishing in the video itself, is quantized general relativity versus a true theory of quantum gravity. As I mentioned in the video, quantizing general relativity leads to a "non-renormalizable" theory. Of course, this is where the misnomer comes in: one can still use renormalization to eliminate all UV divergences, but one needs to include an infinite number of interactions in the theory. The result is what looks very much like what we would usually call an "effective field theory." You can think of such a theory as a power-series expansion in E/M where E is the energy scale of some process (e.g. scattering or decay) and M is some characteristic scale of the theory. We run into such a theory in the standard model at low energies, where e.g. beta decays can be represented by four-fermion contact interactions. The standard energy scale of a beta decay is around the mass difference of the proton and neutron, so around an MeV or so. The interaction strength is described by the Fermi decay constant GF, which is around 10^-5 GeV^-2. Notice that the interaction strength is a dimensionful parameter, versus something like the fine structure constant of QED which is dimensionless. So, there is a characteristic mass scale in the theory M~1/sqrt(GF) and we can build a full, albeit non-renormalizable, theory of beta decay as an expansion in E/M.
      For beta decays, this is completely fine: E/M is going to be a very small number, so the series converges very rapidly. In this case, we are fine just calculating the first term or so in the series and we get a quite accurate prediction.
      However, if instead we were to look at a scattering process such as proton + electron -> neutron + neutrino, we can run into a problem when we allow the energy scale of the scattering to grow. Now, as long as E
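A toy numerical sketch of the E/M expansion described in the reply above (my own illustration, not part of the original comment; all series coefficients are assumed to be 1, i.e. O(1)):

```python
# Partial sums of a toy effective-theory series sum_n (E/M)**n.
# For E << M the series converges after a term or two; as E/M
# approaches 1, convergence becomes painfully slow.
def partial_sum(ratio, order):
    """Sum the toy series through (E/M)**order."""
    return sum(ratio**n for n in range(order + 1))

# Low-energy regime (e.g. beta decay), E/M ~ 1e-3:
low_order1 = partial_sum(1e-3, 1)
low_order5 = partial_sum(1e-3, 5)   # barely differs from low_order1

# Energy approaching the characteristic scale, E/M ~ 0.9:
high_order5 = partial_sum(0.9, 5)
high_order50 = partial_sum(0.9, 50)  # still picking up O(1) contributions
```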

    • @disgruntledwookie369
      @disgruntledwookie369 5 months ago

      @@zapphysicsThanks so much for taking the time to reply!
      So if I follow you correctly, there is more to "quantum gravity" than simply quantizing the gravitational field. In principle one can use perturbative renormalization to quantize gravity, even if in practice it requires infinitely many parameters, but the resulting theory would still be incomplete?
      Regarding the power series expansion in E/M... I understand the mathematical logic here but the physical motivation is slightly lost on me. I get that when E

    • @zapphysics
      @zapphysics  5 months ago

      @disgruntledwookie369
      >> So if I follow you correctly, there is more to "quantum gravity" than simply quantizing the gravitational field. In principle one can use perturbative renormalization to quantize gravity, even if in practice it requires infinitely many parameters, but the resulting theory would still be incomplete?
      I think that this is generally a better way to think of it, but perhaps it's best to change the wording a little bit. I would say that quantizing the gravitational field results in a theory which is perturbatively renormalizable up to energy scales around the Planck scale. Given enough terms/parameters (i.e. going far enough into the expansion in E/Mpl), this theory should give the same results as a full theory of quantum gravity in this energy regime. However, the two theories are not truly equivalent and once one leaves the radius of convergence for the effective theory, one can only use the full theory of quantum gravity (which we don't know).
      >> But that point, represented by ~M currently feels a little arbitrary. I'm not too familiar with interaction strengths in QFT but if I've understood correctly you're simply converting the strength (GF or GN) into an energy/mass by the simplest possible operation (i.e 1/sqrt(GF)). I've seen similar dimensional analysis techniques used before but somehow it seems a bit hand-wavey to me.
      That's because it is a bit hand-wavey. It unfortunately has to be, due to the fact that the effective theory doesn't carry all of the information of the full, UV complete theory. In reality, the true radius of convergence of the effective theory is determined by the parameters of the full theory, which we don't know if we only know the effective theory.
      Perhaps this is a better analogy: if we go back to Galileo, he found that all objects have the same, constant (height-independent) acceleration due to gravity. Now, suppose that we want to find new gravitational physics beyond this constant-acceleration framework, so we will say that there is some dependence on height in the new theory that we do not know. By dimensional analysis, we see that, for the height above the surface of the Earth, h, to give contributions to the acceleration, there must be another length scale to the problem, call it R. Thinking about what the physical meanings of this length scale could be, we really only have one option: it is the size of the gravitating body, in this case, the Earth. So, we will assume that this new theory is some series expansion in h/R, so that we can write the true gravitational acceleration of an object as a = g*(1 + c*h/R) + O(h^2/R^2) where g is the standard 9.81 m/s^2 and c is some dimensionless constant. But now, you see the problem: by measuring deviations in the acceleration of an object, we can only probe the *combination* c/R, there's really no way to isolate just R. So, we can assume that c ~ 1, which just is a statement that the only suppression we expect at higher orders is from h/R, so we can get a rough estimate of the size of the Earth. We of course know from Newton's theory of gravity that the radius of convergence of the series is h/R = 1, but since we can only get a ballpark estimate by using the effective theory (adding an h/R term to constant acceleration), we don't know exactly where this radius is, just because we don't have enough information from just this effective theory; we can only see that our perturbative expansion will run into problems when h ~ R/c. In fact, it could be possible that some symmetry or something in the full theory forbids (or approximately forbids) the linear term, resulting in a suppression that we assumed not to be there, giving an incorrect estimate for R. 
This happens in the standard model with flavor-changing neutral currents and the GIM mechanism, and before electroweak theory was fully understood, it was thought that there was some additional, "superweak" interaction at a scale much higher than the electroweak scale to suppress such interactions.
      >> Naively, given that EWT is considered to be fairly well-understood while gravity is mysterious, I'd have expected the opposite.
      I think the confusion here is coming from not separating the effective theory from the "complete" or full theory. Remember that, according to the standard model, electroweak theory is complete; it is not an effective theory. The effective theory is known as "weak effective theory" or "low-energy effective field theory" (WET or LEFT). The difference is that in WET/LEFT, there are no W, Z, or Higgs bosons (or top quarks for that matter), so it does not actually have the full particle content of electroweak theory. The weak interactions are treated using "non-renormalizable" interactions which are only valid below the electroweak scale ~100 GeV. We say that electroweak theory is well-understood because we know the full UV theory that leads to the interactions that were previously only understood via WET/LEFT.
      Gravity on the other hand, is more mysterious because we don't know the UV complete theory of quantum gravity in the same way that we understand how electroweak interactions arise from spontaneous symmetry breaking of a SU(2)L x U(1) gauge theory. The effective theory of quantum gravity (GR) is very well understood and gives very accurate predictions, but the structure of the theory suggests that it should break down at very high energies.
      >>P.s. I thought one of the big triumphs of the Hawking/Susskind era of black hole physics was the idea that information is "smeared" on the event horizon and ultimately encoded in the Hawking radiation, thus solving the information paradox?
      Again, I will preface with the fact that this isn't really my area of expertise. However, my understanding is that this is a conjectured solution to the information paradox. As far as I know, this isn't an exact result from any UV complete theory of quantum gravity, but shows that if it was a result from such a theory, it would resolve the paradox. There are a few issues, I think: according to GR, there is nothing really special about the event horizon of the black hole, so one needs a modification of GR at the event horizon, which seems to me very difficult to do since you can have quite large black holes where there isn't a strong gravitational field at the event horizon, and modifying GR at weak gravitational fields is going to be tricky. The other main issue is that you still need to stop the black hole from radiating thermally at some point, otherwise it will still evaporate and you then lose the event horizon that you are trying to encode the information on. These problems would need to be resolved in the full UV theory. Again, these problems might be solved, I'm not sure. It's just my take on the matter.
      Hopefully that all makes some sense!
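The Galileo analogy in the reply above can be sketched numerically. This is my own toy illustration (the values of c, R, and h are invented) of the point that a measurement of a(h) only probes the combination c/R, never c and R separately:

```python
# Galileo analogy: the corrected acceleration a(h) = g*(1 + c*h/R)
# only depends on the combination c/R, so c and R cannot be
# disentangled by measuring a(h) alone.
g = 9.81  # m/s^2, the height-independent acceleration

def accel(h, c, R):
    """Leading height-dependent correction to constant acceleration."""
    return g * (1 + c * h / R)

# Two very different (c, R) pairs with the same ratio c/R give
# identical predictions at every height h:
a1 = accel(1000.0, 2.0, 1.28e7)  # c = 2, R = 1.28e7 m
a2 = accel(1000.0, 1.0, 6.4e6)   # c = 1, R = 6.4e6 m
```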

  • @rv706
    @rv706 2 years ago +7

    Urs Schreiber has written that renormalization (and in general whatever in perturbative QFT is based on S-matrices and not Feynman integrals) isn't controversial at all, and it's a perfectly well-defined mathematical concept. It has to do, as far as I understand, with certain (wannabe) products of distributions that appear in the definition of propagators that are well-defined for non-coinciding arguments but that are ill-defined when the arguments coincide (the wave-front sets of the factors aren't compatible) so one has to construct/choose a suitable extension of the "product" of distributions to the coinciding locus, and this thing is called "adding counterterms" by physicists.
    Apparently, at the time of Feynman et al., these things weren't clear enough, and so the renormalization procedure seemed murky and not legit.
    (Also, according to Schreiber, the IR divergences are bypassed by only considering an algebra of observables localized on spatially compact subsets, or something like that.)
    I wish I understood all this stuff, but it seems pretty tough, especially considering I don't know much physics...

    • @piercingspear2922
      @piercingspear2922 1 year ago +1

      I wish I understood more of the underlying mathematics, instead of just magically putting "counter terms" into the physics theory and later questioning the legitimacy of this procedure in my current QFT lecture xD

  • @thephysicsbootcamp4372
    @thephysicsbootcamp4372 1 year ago +3

    Hey, thanks for a great video. You commented that the bare parameters cannot depend on the energy scale, but in Weinberg Section 12.4 on the floating cut-off, Weinberg says that "one requires the bare constants of the theory (those appearing in the Lagrangian) to depend on the cut-off in such a way that all observable quantities are cut-off independent." This seems backwards to me.

  • @fufaev-alexander
    @fufaev-alexander 2 years ago

    Thank you!

  • @2tehnik
    @2tehnik 1 year ago

    I don't get it. We can use the renormalization constants because they don't tamper with the theory. But they also allow us to cancel the divergences?

  • @anywallsocket
    @anywallsocket 1 year ago +3

    Is it consistent? Yes. Is it sus? Yes. The infinities associated with things like self-energy can be corrected for not by adding new interactions to cancel them, but by adding new dimensions, e.g., how Wheeler-Feynman theory removes classical self-energy simply by adding time invariance. This is still sus, however, just less sus imo.

  • @sanketpatel4548
    @sanketpatel4548 10 hours ago

    wow. amazing description

  • @rodrigoappendino
    @rodrigoappendino 1 year ago +1

    I watched a video these days where the guy said a lot of physicists from the last century thought that renormalization was an arbitrary way to deal with infinities, and that it was a flaw in our theory. Now you're saying it's completely valid?

  • @disgruntledwookie369
    @disgruntledwookie369 5 months ago

    I'm really just a beginner with this topic but I like what you said about "non-renormalizable" being a bit of a misnomer. It seems to me that the big problem with quantum gravity is not so much in the theory but in our ability to mathematically calculate predictions, and without predictions we cannot empirically test the theory or do useful engineering with it. It seems like another 3-body problem to me, where some aspect of the math makes it impossible to find an analytic solution, yet the universe happily "solves the equations" all the time, because it doesn't actually have to *solve* anything, it just does it in real time. Is it fair to say the same about non-renormalizable theories? That all the fuss is more about our ability to calculate predictions than it is about the validity of the theory itself? Like you said, the theory IS renormalizable, you just need to specify an infinite number of terms. Well, there are many things in nature like that; pi has an infinite number of digits, for example. So it feels sort of like there is some number, a constant of nature, which represents the "configuration" of the gravitational field. For the EM field this "number" is representable with finite digits, but for GR it isn't. Nonetheless it exists, it's just beyond our capability to ever know it precisely, at least unless we find a more fundamental theory that allows us to calculate this number directly.

  • @amineaboutalib
    @amineaboutalib 1 year ago +90

    if we define the question exactly the way we know the answer to be, we get the correct answer, yay

    • @DavidRodriguez-dy1er
      @DavidRodriguez-dy1er 1 year ago +3

      lmao thank you.

    • @misterlau5246
      @misterlau5246 1 year ago +2

      It's statistical mechanics, so what we look for is a distribution.
      Eigenvalues are never the same
      Xd:dx/dt

    • @thelonegerman2314
      @thelonegerman2314 1 year ago

      LOCAL ENTANGLEMENT SYMMETRY FOR AS KILLING VECTOR OF BANACH SPACE DIVERGENCE INTO HILBERT SPACE THROUGH KRONECKER

    • @marcusrosales3344
      @marcusrosales3344 1 year ago +8

      There's more to it than that.
      In experimental physics, for instance, you can probe only a finite number of degrees of freedom. Renormalization tells you which DOF are relevant and irrelevant at your energy scale.
      It is actually used in neural nets as well. Tensor networks build off the density matrix renormalization group (DMRG), and these make more efficient models by dropping DOF which are irrelevant in the entanglement sense, not necessarily with respect to energy. There are a lot of subtleties in RG, but there are physical reasons for doing it!

    • @misterlau5246
      @misterlau5246 1 year ago

      @@marcusrosales3344 that's right, yes
      You have a certain amount of energy, total = 1, and you need your H to be 1
      🤓

  • @theeddorian
    @theeddorian 11 months ago

    One of the peculiarities of discarding infinities is that it ignores some really important implications. For one thing, while discarding all those irritating, frustrating "trivial" values near zero might make sense locally, if you turn the perspective around, any local variations are indistinguishable from zero when added to or subtracted from infinity. If the universe were infinite in space and time and more or less the same as we see it, all those "trivial" values, say of mass, sum to an infinitely greater value than the infinitely minute sample we call the visible universe.

  • @tonyf8167
    @tonyf8167 1 year ago +1

    you realize of course, the guy who originally said this was a "good idea" later said it was "garbage"...
    and seriously, listening to your explanation only confirms this in my mind...

  • @mrchangcooler
    @mrchangcooler 11 months ago +1

    How is manually tuning variables to fix equations not just multiplying by zero to get rid of a divide by zero?
    At every turn learning about renormalization, the basis of modern physics, it seems more and more like tuning epicycles in the sky to fix planets going retrograde.

  • @mathunt1130
    @mathunt1130 1 year ago +1

    Dimensional regularisation was the one thing that was a bit of a mind-bender when I was studying QFT.
    I don't like the way you explained singular perturbation theory. The way we select delta is to balance terms in the equation; this fixes the parameter delta.

  • @mgg4338
    @mgg4338 1 year ago

    What is the physical meaning of these pseudo-parameters? Might they have something to do with the "granularity" of spacetime?
    Sorry if I ask weird questions but I think that Max Tegmark is onto something...

  • @s-ch4320
    @s-ch4320 1 year ago +1

    5:59: for stupid people like me there is a piece of explanation omitted. We recover both solutions because the first group, corresponding to epsilon to the power of zero, factors into two multipliers, thus giving us two options for y0: 0 and 2.
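A quick check of this point (my own sketch; I assume the video's equation has the regular-perturbation form y**2 - 2*y + eps = 0, which may differ from the exact equation on screen). At order eps**0 the equation reduces to y0*(y0 - 2) = 0, so both y0 = 0 and y0 = 2 are recovered, and the exact roots approach them as eps shrinks:

```python
import math

def roots(eps):
    """Exact roots of y**2 - 2*y + eps = 0 (assumed toy equation)."""
    d = math.sqrt(1.0 - eps)
    return 1.0 - d, 1.0 + d

# As eps -> 0, the exact roots approach the eps**0 solutions 0 and 2:
small, large = roots(1e-8)
```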

  • @guest1754
    @guest1754 2 years ago

    14:55 "RGE guarantees that physical observables are completely independent of this energy scale."
    I'm a bit confused by this statement. I thought that running of a coupling is a real thing, and that renormalized quantities, which depend on the scale, can always be identified as physically measurable observables.

    • @zapphysics
      @zapphysics  2 years ago +7

      @guest1754 great question and this is something that troubled me for a long time as well. As it turns out, these running quantities are *not* physical quantities. The physical observables which we can actually measure are things like cross sections and decay rates, which are related to transition amplitudes/correlation functions/Green's functions/whatever else you want to call them. These functions _depend_ on the renormalized quantities, but the quantities themselves can never be directly isolated in an experiment and only appear in the physical quantities in renormalization-scale/scheme-independent ways.
      I don't think we should be particularly surprised by this; after all, we can arbitrarily change the definition of the renormalized parameters by simply changing the renormalization scheme. When we change scheme, this changes a physical observable's dependence on the renormalized parameters defined in the new scheme, but obviously, a physical observable itself cannot change (since this is just a re-shuffling of terms) and so the renormalized parameters can't be physically measurable.
      In fact, even the renormalization group running of these parameters (e.g. beta functions and anomalous dimensions) is dependent on the choice of scheme when one goes beyond the two-loop level in perturbation theory (Terning has a nice, simple proof of this in Chapter 8 of his SUSY textbook). So again, this running cannot be physical since we can arbitrarily change it by changing our renormalization scheme.
      Now, here's where things can get confusing: something like the rest-mass of a particle *is* a physical parameter (this is just the pole in the two-point correlation function, which is a physical observable), but there's no reason that the renormalized mass (the "m" showing up in the Lagrangian) has to be equal to this pole mass. Of course, we can _choose_ our renormalization scheme so that these two coincide (known as the on-shell scheme), but there's no reason we necessarily have to work in this scheme. This does, in fact, often cause a lot of confusion. For example, since the light quarks are always bound together into hadrons, it is impossible to isolate, say, a single up quark to measure its rest mass. Due to this, the "mass" of these particles must be reported in a particular renormalization scheme at a specific value of the renormalization scale! This is why on the quark summary table in the PDG (pdg.lbl.gov/2021/tables/contents_tables.html) there is a disclaimer at the top of the page which gives information for the scheme and scale at which the values are extracted.
      So really, what happens is that the values of these parameters must be fit to experimental data at a particular energy scale (this is just a convenience; there is no reason we necessarily have to choose the renormalization scale to be the physical scale of an experiment, but it makes things easier) based on the calculations of these observables in a particular choice of scheme. This gives us a set of initial conditions for the renormalization group running of the parameters in _this_ scheme, allowing us to determine the same parameters at any other scale for other calculations. Note, though, that if we want to calculate in a different scheme, we either need a different set of measurements or we have to convert the scheme we wish to use to the one that matches with the experimentally determined values. When doing higher-order calculations in perturbation theory, it is actually often very important that you are choosing the renormalization scheme which is best suited for the inputs determined from experiment!
      I hope that cleared some things up, I know this can be very confusing. If you have any more questions, feel free to ask!

    • @guest1754
      @guest1754 2 years ago +2

      @@zapphysics Oh wow, I did not expect this long of an answer. Thank you very much!
      It did clear up at least one thing. The point that we cannot "observe" couplings or masses per se but infer them from cross section and decay rate measurements is something I totally overlooked. The measurement should not depend on any of the scales or schemes, like you said, but the theory parameters can. It shouldn't matter how we shuffle or scale the terms, at the end of the day the calculations should yield the same results when computing the full perturbation series. In practice, though, the series is truncated (because math is hard), which is why we get scale (but not scheme?) dependence even in predicted cross sections. That's also why the renormalization and factorization scales are a major source of uncertainties on the cross section. The optimal choice of scales should be the one that minimizes the uncertainties, but in principle we could use any scale. It's just that we'd need to compute more terms in the perturbation series, in order to reach the same level of accuracy.
      It bothers me a bit that it's not always clear what is an observable or what is a parameter of the theory. My understanding now is that we can measure both charge and mass of a non-relativistic electron from the effects of macroscopic (Coulombic and gravitational) forces that the particle exerts onto its environment, but as we give the electron more energy, virtual contributions become more important. Since the quantum processes are inherently probabilistic, the only acceptable way of describing the high energy behavior of an electron is through its statistical properties like decay rates and scattering cross sections with other particles. We can parametrize them in terms of its mass and charge that appear as constants in low-energy regime, but as we move up to higher energy scales, we get effects like running of coupling and mass. This is an artifact of the theory because we try to describe high-energy physics in terms of low-energy variables.
      I don't feel particularly comfortable with this topic and I'm not well trained on QFT, as you can see, so feel free to correct me :) Thank you for your time.

  • @KaliFissure
    @KaliFissure 4 months ago

    @zap_physics
    Surface(cos(u/2)cos(v/2),cos(u/2)sin(v/2),sin(u)/2),u,0,2pi,v,0,4pi
    This radially symmetric single sided closed surface fulfills the topology of renormalization and twistors.
    There are two regions connected by a point of catastrophe, where the density of the manifold reaches maxima.
    But notice that a surface vector of it, while maintaining orientation, becomes inverted, and there must be a second orbit to reorient: 4pi.
    🖖

  • @Jaylooker
    @Jaylooker 6 months ago

    Solving the perturbation theory with the substitution at 6:39 sounds very similar to solving an ODE with its solution being a propagator as a sum of exponentials.
    This renormalization sounds like Ramanujan summation with an implied analytic continuation. Wick's rotation is a kind of analytic continuation of time. I'm not sure whether Wick's rotation is that analytic continuation.

  • @rockapedra1130
    @rockapedra1130 2 years ago +1

    Around 4:12 ish, you mention solving each term order by order. It appears you are saying that each term must be equal to zero? I don't understand this. The equation appears to be saying the infinite sum is zero, not that **every** term is zero. I see that if every term is zero, we get a solution but it's unclear to me how many more there are. What am I missing here?

    • @zapphysics
      @zapphysics  2 years ago +8

      @Andre Sant65 great questions and observation here! In fact, I kind of regret not addressing this excellent point in the video itself. In perturbation theory, there is a sort of underlying assumption that each coefficient in the expanded series is roughly ~O(1). If this is the case, each term in the series essentially "decouples" from every other term simply due to the fact that ϵ is a small number. For example, say that ϵ~1/10. Then, at ϵ^0 order, we expect a result O(1), then the ϵ^1 term should be O(1/10), the ϵ^2 term O(1/100) and so on. So as long as the coefficients of each term aren't super huge or super tiny, the separate terms live at completely different orders of magnitude and so they shouldn't interfere with each other at all.
      However, this does raise the issue of what happens when the coefficients *do* get too large. This is actually another way that perturbation theory can break down, even if the small numbers in the theory remain small. This problem can arise in particle physics contexts when calculating higher-order corrections. Essentially, these calculations can lead to logarithms of mass ratios, e.g. g^2*log(m1/m2) where g is a coupling that we assume is small. So you see that, even if g remains small, if we have largely separated mass scales (i.e. m1 >> m2 or m1
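A toy numerical sketch of the "decoupling" of orders described in this reply (my own illustration; the coefficients are invented and assumed O(1)):

```python
# With O(1) coefficients and eps ~ 1/10, each term in the perturbative
# series lives at its own order of magnitude, so the orders effectively
# decouple and can be solved one at a time.
eps = 0.1
coeffs = [1.3, 0.8, 1.1, 0.9]  # invented O(1) coefficients
terms = [c * eps**n for n, c in enumerate(coeffs)]
# terms ~ [1.3, 0.08, 0.011, 0.0009]: well-separated magnitudes
```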

    • @rockapedra1130
      @rockapedra1130 2 years ago +1

      @@zapphysics Wow, thanks for the thorough reply!
      I think I get the first paragraph (I'm new at this). I think you are saying that provided that every coefficient is roughly the same size, aka O(1), then approximating by "cutting off" at a certain order will have errors smaller than that order, right? So say we cut off at e^2, then the part we left off will cause an error of order e^3 on the right side (won’t be zero). So as we sequentially approximate to higher orders, each preceding order must be zero so our error keeps reducing in order also. So that’s why all terms are set to zero! I think … ?!?
      The second paragraph, I think, was a taste of what is probably a very complicated topic. I liked the example with different energy scales! I didn't quite understand how to deal with this problem but I got a glimmer. Somehow you have to get rid of the offending parts and then compensate for that with extra terms like you did in your video. Very cool!
      Thanks again!

  • @sirknightartorias68
    @sirknightartorias68 2 years ago

    Is x0 = 1/3 ?? At 4:11

  • @polyhistorphilomath
    @polyhistorphilomath 7 months ago

    The initial analogy is troublesome when considering x(2) a function of the perturbation, ε. See 6:08 .
    There is a pole of order 1 at the origin in x(2)(ε) but presumably ε (in accordance with most conventions) was to be a small, positive quantity. If--as it seems--we have x(2) = 2/ε - x(1), then we cannot exactly fix the issue by allowing the perturbation to increase without bound either. If we do, we can recover a non-perturbed relation between x(1) and x(2) (as the 2/ε term goes to 0 in the limit). However we are left with infinitely many perturbed terms of non-finite value.
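A minimal numerical sketch of the pole this comment describes, with x(1) = 1 assumed finite purely for illustration:

```python
# With x2(eps) = 2/eps - x1, the second root runs off to infinity as
# eps -> 0+, which is exactly why it is invisible in the unperturbed
# (eps = 0) equation: a hallmark of a singular perturbation.
x1 = 1.0  # assumed finite first root, for illustration

def x2(eps):
    return 2.0 / eps - x1

divergent = [x2(e) for e in (0.1, 0.01, 0.001)]  # grows without bound
```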

  • @mehmetirmak4246
    @mehmetirmak4246 1 month ago

    "mathematical trick" really convinced me

  • @michaelalbergo8893
    @michaelalbergo8893 6 months ago

    so good, so so good

  • @chrismorrison8047
    @chrismorrison8047 1 year ago +5

    If Dirac didn't like renormalization, I'm not entirely sure we should be so comfortable with it. I'm not saying QED is wrong. It's obviously on to something fundamentally correct (pun intended). But, at a minimum, it *does* seem to suggest, to me, that we're missing some really important facts and that the theory is terribly incomplete. And that shouldn't be too controversial a claim. The upshot is that we should be careful how many implications we extract from it, though.

    • @zapphysics
      @zapphysics  1 year ago +7

      Hi, thanks for the comment! I've somewhat addressed this in another comment, but I'll reiterate a bit here.
      The thing that you have to keep in mind is that Dirac died almost 40 years ago, before technology had really reached a point where a firm understanding of higher-order perturbative corrections was necessary (after all, it only is really necessary to calculate to the precision of experiments). As such, before the 60s and 70s (pretty late into Dirac's career and life), people didn't really have a full understanding of renormalization and this often led to ugly (and many times, improper) use of the technique. In fact, the renormalization group as we know it didn't even really exist until 1970, so it's no surprise that the "founding fathers" of QFT were pretty skeptical of renormalization, while modern physicists are much more receptive to it. All this to say, with time comes a better understanding, and we now understand what renormalization is actually doing much better than during Dirac's life, especially earlier in his career.
      Really, all renormalization is is bookkeeping. You can sort of think of it this way: if you try to calculate in perturbative quantum field theory with the same parameters you would use in the classical theory, you get divergences. This isn't really surprising, though, since the parameters get quantum corrections. So really, renormalization is just a method of systematically accounting for these corrections and including them later on in other calculations.

  • @Higgsinophysics
    @Higgsinophysics 2 years ago +1

    ZAP you are the best

    • @zapphysics
      @zapphysics  2 years ago +1

      @Higgsino physics No u!

  • @willemesterhuyse2547
    @willemesterhuyse2547 10 months ago

    Computations with renormalization should be acknowledged as equivalent to measurements.

  • @hooked4215
    @hooked4215 8 months ago

    I'm not convinced yet about the validity of this stuff. I guess I should renormalize my mind.

  • @PERF0RMANCEMUSIC
    @PERF0RMANCEMUSIC 9 months ago +1

    Numbers don't have absolute values. They carry the values we ascribe to them. Even then the problem continues because we can't have two of anything because no two objects are precisely the same. Even handling them disturbs their composition. The natural numbers invoke the infinite.

  • @Name-ot3xw
    @Name-ot3xw 8 months ago

    I've often said that maths should be made more convenient, glad to see one of my ideas being taken seriously for once.

  • @KaliFissure
    @KaliFissure 1 year ago

    I really think that renormalization is an artifact of the single sided nature of the topology of the universe.
    As does electron 1/2 spin. There is the antimatter universe functionally on the "other side" of the universe/time, and yet that antimatter universe is coupled intimately with this universe, and thus topologically there is only one side to the universe. The missing Klein bottle.
    Surface(cos(u/2)cos(v/2),cos(u/2)sin(v/2),sin(u)/2),u,0,2pi,v,0,4pi
    Notice that 4 pi , 2 full rotations are needed to complete the surface. Electron half spin.
    Photon is expressing both sides simultaneously, and is it's own anti when phase inverted.
    Imagine the universe as a plane, time is the infinitesimal thickness, one planck second.
    clockwise on one side of this membrane would be perceived as counterclockwise from the other side.
    an outflow from one side creates an inflow on the other.
    time is a compact dimension; then through Kuramoto synchrony we evolve a plane of the present which all points share, but because of the Lorentzian mess it's impossible to truly place ourselves accurately relative to anything. we have our experience sphere, local present, as experienced.

  • @TomHendricksMusea
    @TomHendricksMusea 1 year ago +1

    Quote from Secret Melody, by Thuan
    In the past, the appearance of infinite values has always signalled a breakdown of our theories, rather than extreme behavior on the part of the universe. It has been a sign of lack of imagination on our part, rather than a property of nature itself.
    My suggestion is that
    PHOTONS and ELECTRONS / POSITRONS are the same thing. They are two Versions of the same thing.
    Photons create electron positron pairs.
    Electron positron annihilate into energy, photons
    Let's rephrase.
    Photons turn into electron positron pairs under certain conditions.
    Electrons and positrons turn into photons under certain conditions.
    Photons are the speed-of-light version.
    Positron/electron pairs are the space-time version.
    Furthermore, the electron and positron are entangled and switch back and forth under changing conditions.
    Are photons created by electrons?
    The movement of electrons is responsible for both the creation and destruction of photons, and that's the case for a lot of light production and absorption. An electron moving in a strong magnetic field will generate photons just from its acceleration. (Apr 19, 2016)
    The fact that pair conversion is more active at higher temperatures is a clue to what could have happened at post-Big Bang temperatures.
    For an electron to quantum jump it needs a photon for energy. Maybe electrons move continually from photon energy.

  • @davep8221
    @davep8221 1 year ago +1

    Kinda the opposite of the old assembly language instruction MBZ == Multiply By Zero to fix things when you accidentally divide by zero.

  • @skylorwilliams5036
    @skylorwilliams5036 1 year ago +1

    What I really want to know is: is this all theoretical, or has it been observed?

    • @skylorwilliams5036
      @skylorwilliams5036 1 year ago +2

      I’ve been drinking.

    • @SunShine-xc6dh
      @SunShine-xc6dh 1 year ago

      It's a trick so they can always match theory to whatever experimental results are observed.

  • @robinwang6399
    @robinwang6399 8 months ago +1

    This feels like the epicycle theory of the solar system. It works when supplied with some intuition or facts, but is not physical.

    • @asd-wd5bj
      @asd-wd5bj 7 months ago

      Any theory at the quantum scale will feel like epicycles; at this level of complexity we won't get a "pretty" theory no matter what we do.

  • @peterkiedron8949
    @peterkiedron8949 1 year ago

    Some divergent series can be summed by adding up various groups of elements in various orders. The problem is that the process is not unique: different summation methods can make the final sum come out to be anything. Thus the question is whether the renormalization scheme is unique. If not, then different renormalization schemes will produce different results.
    Is it possible that theoreticians are guided by experiments and can always find a renormalization scheme that produces the experimental result? If so, then this is not science in the Western sense. It is more akin to how the Vilna Gaon obtained pi from the Torah.
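The order-dependence this comment alludes to can be made concrete with a conditionally convergent series: by Riemann's rearrangement theorem, reordering the terms of the alternating harmonic series steers the sum to any target. A minimal Python sketch (the function names are my own, for illustration):

```python
def natural_sum(n):
    """Partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - ..."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged_sum(target, n):
    """Greedily reorder the same terms so the partial sums home in on
    an arbitrary target (Riemann's rearrangement theorem)."""
    total, pos, neg = 0.0, 1, 2  # next positive (odd) and negative (even) denominators
    for _ in range(n):
        if total < target:
            total += 1.0 / pos  # take the next unused positive term
            pos += 2
        else:
            total -= 1.0 / neg  # take the next unused negative term
            neg += 2
    return total

# The same terms, summed in different orders, approach different limits:
print(natural_sum(100_000))          # ≈ 0.6931 (ln 2)
print(rearranged_sum(0.5, 100_000))  # ≈ 0.5
```

The analogy to renormalization is only partial: in QFT the scheme dependence is constrained, and consistent schemes agree on physical predictions once the couplings are fixed by experiment.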

  • @wolfgangengler8088
    @wolfgangengler8088 1 year ago +5

    Renormalization gives an infinite number of perturbative "counter-terms"; before Kepler's time, astronomy likewise included an infinite number of perturbative "epicycles". Modern and classical good science and good math always find ways to avoid contrived infinite lists of complications: whether you get there by theoretical math tricks or by experimental phenomenology, needless complexity is always inferior to optimal simplicity. Occam's Razor necessarily shaves away "all but finitely many" corrections; detailed confirmation, measurement, and application of the respective finite corrections can then go on, and theory goes on too, becoming good experimental and engineering science. Because good science needs finite theoretical complexity for experimental logical consistency, scientists generally reject non-renormalizable theories as really bad, or even, in the words of Pauli, as "not even wrong" physics.

  • @enterprisesoftwarearchitect
    @enterprisesoftwarearchitect 2 years ago

    WOW!

  • @alcyonecrucis
    @alcyonecrucis 1 year ago

    the second level of the QED BABA ?

  • @TomHendricksMusea
    @TomHendricksMusea 1 year ago

    What if they are correct and ...
    My suggestion is that:
    1. the singularity before the Big Bang was all photons, and
    2. that the universe was made by pair conversion where photons make electron positron pairs.
    Readers challenge me with, how can you prove that?
    Most of it has already been proved!
    These three things that we know are true support many of my ideas on the importance of photons in physics:
    1. Photons are outside of time and distance.
    2. Photons create an electron-positron pair in pair conversion. (Under extreme conditions photons can create proton/antiproton pairs and neutron/antineutron pairs.)
    3. Should all the mass be converted to energy, we would have a universe of photons.

  • @ElonTrump19
    @ElonTrump19 11 months ago

    It seems that trying to rationalize perceived interactions without addressing design intentions is futile. These numbers mean nothing, because the starting point for calculations is arbitrary at best and none of the actual relationships involved can ever be known.

  • @BboyKeny
    @BboyKeny 1 year ago

    I feel like this is very reminiscent of Hermetic Alchemy.

  • @heywrandom8924
    @heywrandom8924 2 years ago

    The infrared divergences still need to be considered when the system is fine-tuned to a critical point where there is scale invariance. While our world is not scale invariant, condensed-matter systems can be, by fine-tuning parameters of the system (for example, the critical point of a liquid/gas phase transition). Also, questions regarding fine-tuning in hierarchy problems (in high-energy physics) can in some cases be understood better by considering that the system is close to a critical point (which might be complex), so critical points are also relevant (to some extent) in high-energy physics.

  • @juliusfucik4011
    @juliusfucik4011 11 months ago +1

    Even negative numbers are a trick... complex numbers... etc. As long as the math checks out, it should be fine.
    Not sure about this trick though.

    • @ayushsharma8804
      @ayushsharma8804 10 months ago

      No negative number is any more of a trick than a natural number.
      Either all numbers are imaginary or none are.

  • @vjfperez
    @vjfperez 1 year ago

    Looks like a trick, sounds like a trick, smells like a trick, feels like a trick, tastes like a trick

  • @jeff5881
    @jeff5881 1 year ago

    A better example would have been taking a divergent series and making sense of it.
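One classic instance of this (my own illustration, not from the video): Grandi's divergent series 1 - 1 + 1 - 1 + ... can be assigned the value 1/2 by Cesàro summation, i.e. by averaging its partial sums:

```python
def cesaro_value(n_terms):
    """Average of the first n partial sums of Grandi's series 1 - 1 + 1 - 1 + ..."""
    partial_sums = []
    s = 0
    for k in range(n_terms):
        s += (-1) ** k          # terms: +1, -1, +1, -1, ...
        partial_sums.append(s)  # partial sums oscillate: 1, 0, 1, 0, ...
    return sum(partial_sums) / len(partial_sums)

print(cesaro_value(100_000))  # → 0.5
```

Cesàro summation is one of the tamer regularization schemes; stronger methods (Abel summation, zeta-function regularization) extend the same idea to series that diverge faster.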