NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (ML Research Paper Explained)

  • Published on Nov 19, 2024

Comments • 150

  • @YannicKilcher
    @YannicKilcher  3 years ago +25

    OUTLINE:
    0:00 - Intro & Overview
    4:50 - View Synthesis Task Description
    5:50 - The fundamental difference to classic Deep Learning
    7:00 - NeRF Core Concept
    15:30 - Training the NeRF from sparse views
    20:50 - Radiance Field Volume Rendering
    23:20 - Resulting View Dependence
    24:00 - Positional Encoding
    28:00 - Hierarchical Volume Sampling
    30:15 - Experimental Results
    33:30 - Comments & Conclusion

    • @G12GilbertProduction
      @G12GilbertProduction 3 years ago

      Captions, dear bud'! Caaaaaaaaptions!

    • @jimj2683
      @jimj2683 1 year ago

      Imagine this, but using an AI model that is trained on vast amounts of 3d data from the real world. It would be able to fill in the gaps with all the experience it has much more accurately.

    • @THEMATT222
      @THEMATT222 1 year ago

      Noice 👍

  • @thierrymilard1544
    @thierrymilard1544 3 years ago +40

    First time I really truly understand NeRF. Wonderfully simple explanation. Thanks a lot!

  • @Jianju69
    @Jianju69 2 years ago +9

    This type of pre-digestion for a complex technical paper is very expedient. Thank you.

  • @TheDukeGreat
    @TheDukeGreat 3 years ago +60

    Wait, did Yannic just release a review of a paper I already read? So proud of myself :D

    • @NikolajKuntner
      @NikolajKuntner 3 years ago

      Incidentally, same.

    • @Kolopuster
      @Kolopuster 3 years ago +5

      It's even my thesis subject :O

    • @DMexa
      @DMexa 1 year ago

      So proud of Yannic bro, he is sharing this awesome knowledge!

  • @adriansalazar8303
    @adriansalazar8303 1 year ago +3

    One of the best NeRF explanations available. Thank you so much, it helped a lot.

  • @gravkint8376
    @gravkint8376 1 year ago +2

    Gotta present this paper for a seminar at uni so this video makes it so much easier. Thank you so much for this!

  • @vslaykovsky
    @vslaykovsky 3 years ago +15

    "Two papers down the line" we'll probably see a paper that also infers positions and directions of photos.

  • @peter5470
    @peter5470 10 months ago

    My guy, this has to be the best tutorial on NeRF I've seen, finally understood everything

  • @LouisChiaki
    @LouisChiaki 3 years ago +32

    I feel this approach has probably been used in physics and 3D image reconstruction for a long time with the Fourier decomposition technique (which is renamed positional encoding here). The main point is that it is 1 model per object, so I feel like it is a curve-fitting problem. Though using gradient descent and a neural-network-like framework probably makes it much easier to model.

    • @paulothink
      @paulothink 2 years ago +2

      Would that be analogous to the DCT technique in video codecs, and could that hopefully shed some light on a potentially better video codec? 👀

    • @Stopinvadingmyhardware
      @Stopinvadingmyhardware 1 year ago

      DTFT?

    • @tomfahey2823
      @tomfahey2823 1 year ago

      The "1 model per object" is an interesting, if not surprising, evolution in itself, as it can be seen as a further step in the direction of neural computing (as opposed to algorithmic computing), where memory/data and computation are encoded in the same structure (the neural network), in a manner not dissimilar to our own brains.

  • @免費仔的驕傲
    @免費仔的驕傲 1 year ago +1

    Man, you've got many clear notes explaining papers. I got tons of help from your videos.

  • @muhammadaliyu3076
    @muhammadaliyu3076 3 years ago +5

    UC Berkeley - I salute this university when it comes to A.I. research. In most big papers, you will definitely see one or more scholars from it.

  • @aayushlamichhane
    @aayushlamichhane 1 year ago +3

    Awesome explanation! Please don't stop making these.

  • @nitisharora41
    @nitisharora41 5 months ago

    Thanks for creating such a detailed video on NeRF.

  • @trejohnson7677
    @trejohnson7677 3 years ago +2

    The "overfitting" is one of the core principles in Functional Programming/Dataflow Programming. Very awesome to see; will have to check whether or not it was a locally unique idea, or if it is directly pulling from the aforementioned knowledge bases.

    • @heejuneAhn
      @heejuneAhn 2 years ago

      Can we say memorizing instead of "overfitting"? It sounds more intuitive to me.

  • @willd1mindmind639
    @willd1mindmind639 3 years ago +3

    It more closely represents what happens in the brain, where the neural networks represent a coherent high-fidelity representation of real-world signal information. However, that kind of detailed scene representation normally has a lot of temporal decay, with the "learning" being a set of generalized learning elements extracted from such input info. For example, you could "learn" a generalized coordinate space (up, down, left, right, near, far), depth perception, perspective, surface information (convex, concave, etc.), shape information, etc. But that would be another set of networks for specific tasks with less temporal decay and more generalization parameters to allow higher-order understanding such as object classification, logical relationships between objects and so forth.

  • @kameelamareen
    @kameelamareen 1 year ago +1

    Beautiful and Super Intuitive video ! Thanks :3

  • @dsp4392
    @dsp4392 2 years ago +2

    Excellent explanation. Realtime 3D Street View should be right around the corner now.

  • @Dave_Lee
    @Dave_Lee 2 years ago

    Great video. Thanks Yannic!

  • @howdynamic6529
    @howdynamic6529 2 years ago +2

    Thank you for the clear-cut and thorough explanation! I was able to follow and that is definitely saying something because I come from a different world, model-based controls :)

  • @NoobMLDude
    @NoobMLDude 2 years ago

    Thanks for the Great explanation. Finally understand the central ideas behind NeRF.

  • @任辽-n8n
    @任辽-n8n 2 years ago

    A very detailed explanation, thanks to you!

  • @YangLi-x9s
    @YangLi-x9s 1 year ago +1

    Pretty clear and great thanks to you!!

  • @Milan_Openfeint
    @Milan_Openfeint 3 years ago +1

    Nice, Image Based Rendering strikes back after 25 years.

  • @SanduniPremaratne
    @SanduniPremaratne 3 years ago +4

    How are the (x,y,z) coordinates obtained for the input data?
    I assume a pose estimation method was used to get the two angles?

  • @tnmygrwl
    @tnmygrwl 3 years ago +2

    Had been waiting for this for a while now. 🔥

  • @ChuanGuo-y3i
    @ChuanGuo-y3i 2 years ago

    This video helps a lot for a fresher like me to understand NeRF, thanks!

  • @michaellellouch3682
    @michaellellouch3682 3 years ago

    Superbly explained, thanks!

  • @Snuson
    @Snuson 1 year ago

    Loved the video. Learned a lot. Thanks

  • @hbedrix
    @hbedrix 2 years ago

    awesome video! Really appreciate you doing this!

  • @bilalbayrakdar7100
    @bilalbayrakdar7100 1 year ago

    bro you are killin' it, pretty damn good explanation thanks

  • @ethanjiang4091
    @ethanjiang4091 3 years ago

    I watched a video on the same topic before but got lost. Now I get it after watching your video.

  • @daanhoek1818
    @daanhoek1818 2 years ago +4

    Really cool. I love getting into this stuff. I'm a compsci student in my first year, but considering switching and going for AI. Such an interesting field.
    What a time to be alive! ;)

    • @minjunesong6667
      @minjunesong6667 2 years ago +1

      I'm also a first year student, feeling same here. Which school do u go to?

    • @daanhoek1818
      @daanhoek1818 2 years ago +1

      @@minjunesong6667 The university of Amsterdam

  • @thecheekychinaman6713
    @thecheekychinaman6713 9 months ago

    Crazy to think that this came out 2 years ago, advancement in the field is crazy

  • @shempincognito4401
    @shempincognito4401 2 years ago

    Awesome explanation! Thanks for the video.

  • @juang.8799
    @juang.8799 1 year ago

    Thanks for the explanation!!

  • @alpers.2123
    @alpers.2123 3 years ago +27

    Dear fellow scholars...

  • @canhdz169
    @canhdz169 8 days ago

    Thank you so much, it helped a lot.

  • @ferranrigual
    @ferranrigual 10 months ago

    Amazing video, thanks a lot.

  • @thomsontg1730
    @thomsontg1730 3 years ago

    Great explanation, I really enjoyed watching it.

  • @siyandong2564
    @siyandong2564 2 years ago +1

    Nice explanation!

  • @truy7399
    @truy7399 1 year ago

    I was searching for nerf guns; this is better than what I asked for.

  • @6710345
    @6710345 3 years ago +11

    Yannic, would you ever review your own paper? 🤔

  • @IoannisNousias
    @IoannisNousias 3 years ago +3

    D-NeRF
    Great explanation as always Yannic! Will you be doing a follow up on their next paper (D-NeRF), which handles dynamic scenes?

  • @晨希刘
    @晨希刘 3 years ago

    Wonderful videos! Thanks for sharing~

  • @jaysethii
    @jaysethii 1 year ago

    Phenomenal video!

  • @Bellenchia
    @Bellenchia 3 years ago +1

    I heard about Neural Radiance Fields on the TWIML podcast earlier this year, and never connected that it was the same paper Károly (and now, Yannic) talked about.
    It's funny how we associate a paper with a photo or graphic a lot of the time.

    • @Bellenchia
      @Bellenchia 3 years ago

      Fellow visual learners feel free to @ me

    • @Bellenchia
      @Bellenchia 3 years ago

      Also want to mention that you did a much better job explaining this than Pavan Turaga did on the episode in question, so well done Yannic. The episode I'm talking about is called Trends in Computer Vision or something along those lines for those interested.

  • @Dyxuki
    @Dyxuki 5 months ago +1

    this is VERY cool.
    But I believe that if given multiple views (images) of a scene, and if we are able to match enough points, it's possible to generate a photogrammetric model of the scene (a little bit like stereo vision, but with many more views so the generated model is more complete), and once we have that, we can simply sample or reproject it to any point of view. Isn't that a simpler way of solving the same problem?

  • @우시후-i4w
    @우시후-i4w 10 months ago

    Thank u😮😮😮😮😮 amazing description

  • @usama57926
    @usama57926 2 years ago

    This is mind blowing

  • @bona8561
    @bona8561 2 years ago +3

    Hi Yannic, I found this video very helpful. Could you do a follow-up on Instant NeRF by Nvidia?

  • @AdmMusicc
    @AdmMusicc 1 year ago

    This is an amazing explanation! I have a doubt though. You talked about the major question of training images not having information about "density". How are we even computing the loss in that case for each image? You said we compare what we see with what the model outputs. But how does the model give different density information for a particular pixel if we don't have that kind of information in the input? How will having a differentiable function that can backtrack all the way to the input space be of any help if we don't have any reference or ground truth for the densities in the training images?

  • @vaishnavikhindkar9444
    @vaishnavikhindkar9444 1 year ago +1

    Great video. Can you please make one on LeRF (Language embedded Radiance Fields)?

  • @laurentvit3117
    @laurentvit3117 5 months ago

    Duuuude I'm learning NeRF, and this video is a jewel, thank you!

  • @albertoderfisch1580
    @albertoderfisch1580 3 months ago

    BRAVO!!!

  • @agnivsharma9163
    @agnivsharma9163 3 years ago +3

    Can anyone tell how we get the density parameter during training, since we don't have the full 3D scene?

    • @jeteon
      @jeteon 3 years ago +2

      The density is something the network makes up. It only exists so that you can use it to say what each pixel from a new viewpoint should look like. If you ask the network to generate a picture you already have then the loss from comparing them gives the network enough information to find a density.
      The density it finds doesn't mean anything outside of the specific algorithm that generates images from the network outputs. Just think of the network as generating 4 numbers and then coupled to some other function h(a, b, c, d) that we use to generate pictures from those 4 numbers. We can name the 2nd number "red" but the network doesn't "care", it's just an output, same as what they chose to call "density".

    • @shawkontzu642
      @shawkontzu642 3 years ago +2

      The output used for training is not the (color, density) array but the rendered images: after the network predicts (color, density) for the sample points, this info is rendered into images using the volume rendering technique, so the loss is the error between rendered images and training images rather than on the (color, density) values themselves.

    • @agnivsharma9163
      @agnivsharma9163 3 years ago

      Thank you so much for both the replies. Now, that you have explained it, it makes much more sense to do it like that. It also helped me clarify a few doubts which I had with follow-up NERF based papers. Huge help!

    • @jeteon
      @jeteon 3 years ago

      @@shawkontzu642 Yes, I agree. The (color, density) is an intermediate output that gets fed into h(a, b, c, d) whose outputs are rendered images. The h function doesn't need to be trained though. It is just the volume rendering technique.
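
    A rough sketch of the volume-rendering step described in the replies above (my own illustration of the standard NeRF compositing rule, not code from the paper): the per-sample (color, density) outputs are composited into a single pixel color, and only that pixel is compared against the training photo, which is why no density ground truth is ever needed.

        import numpy as np

        def composite_ray(rgb, sigma, deltas):
            """Render one pixel from per-sample (color, density): alpha_i = 1 - exp(-sigma_i * delta_i),
            weighted by the transmittance accumulated along the ray."""
            alpha = 1.0 - np.exp(-sigma * deltas)                            # opacity of each segment
            trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]    # light surviving up to each sample
            weights = trans * alpha
            return (weights[:, None] * rgb).sum(axis=0)                      # final RGB for this ray

        # toy example: 64 samples along one ray
        n = 64
        rgb = np.random.rand(n, 3)       # network's predicted colors
        sigma = np.random.rand(n) * 5.0  # network's predicted densities
        deltas = np.full(n, 1.0 / n)     # spacing between samples
        pixel = composite_ray(rgb, sigma, deltas)
        # the MSE between pixels rendered like this and the ground-truth photo pixels is the training loss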

  • @michaelwangCH
    @michaelwangCH 3 years ago

    Cool effect. I saw this on Two Minute Papers. Training a NN from different perspectives of the same object - it's hard to get the right data.

  • @qwerty123443wifi
    @qwerty123443wifi 3 years ago

    Awesome! Was hoping you'd do a video on this one.

  • @CristianGarcia
    @CristianGarcia 3 years ago +1

    I was a bit confused, I thought this paper already had been reviewed but it was actually the SIREN paper.

  • @isbestlizard
    @isbestlizard 2 years ago

    You could stack lots of objects, so long as you know the transformation from object to world coordinates and give each object a bounding volume in world space for the ray tracer to bother calculating. If you had a supercomputer you could render worlds with thousands of overlapping and moving objects :D

  • @jonatan01i
    @jonatan01i 3 years ago +1

    I started the "I invented everything" video yesterday and paused to continue today, but it's private now :(

  • @Siva-wv5zk
    @Siva-wv5zk 2 months ago

    Great explanation, thanks Yannic. Shouldn't it be called 6D - x, y, z, direction, color and density?

  • @starship9874
    @starship9874 3 years ago +1

    Hey will you ever do a video explaining Knowledge Graphs / Entity embeddings? For example by talking about the "Kepler, a unified model for KE and PLM" paper

  • @dr.mikeybee
    @dr.mikeybee 3 years ago

    View synthesis shows the power of interpolation!

  • @masterodst1
    @masterodst1 3 years ago

    It'd be cool if someone combined this with volumetric or lightfield displays.

  • @Bellenchia
    @Bellenchia 3 years ago +1

    I think the fact that it takes 1-2 days on a V100 is the biggest gotcha

    • @jeroenput258
      @jeroenput258 3 years ago

      Another gotcha: the view dependent part only comes into play in the last layer. It really doesn't do much but optimize some common geometry.

    • @Bellenchia
      @Bellenchia 3 years ago

      @@jeroenput258 maybe this is akin to GPT-3's large store of knowledge, where the top layers all store information about the image and the last layer basically just constructs an appropriate image for a given view using the shapes and textures it learned

    • @jeroenput258
      @jeroenput258 3 years ago

      Perhaps, but I don't know enough about GPT-3 to answer that. What I do find odd is that when you move the ray direction parameter up just one layer, the whole thing falls apart. It's really strange NeRF even works imo.

  • @maryguty1705
    @maryguty1705 months ago

    So the network only works on one scene? And it is more of a 3D model compressor than a 3D scene generator, am I understanding this correctly?

  • @dr.mikeybee
    @dr.mikeybee 3 years ago

    How compact are minimally accurate models? How many parameters?

  • @paulcassidy4559
    @paulcassidy4559 3 years ago

    hopefully this comment isn't in bad taste - but the changes in lighting patterns on the right hand side at 2:51 reminded me a lot of how light behaves while under the influence of psychedelics. yes I'll see myself out...

  • @AThagoras
    @AThagoras 3 years ago +3

    I must not be understanding something. How do you get the density from 2D images?

    • @arushirai3776
      @arushirai3776 3 years ago +1

      I don't understand either

    • @jeteon
      @jeteon 3 years ago

      The 2D images give you multiple perspectives of the same point in space just from different angles. If you combine that information (in this case using the neural network) then you can get a good idea of whether or not there is something at a particular point. Density is not how much mass there is in the volume around a point but rather how much stuff there is at that point that interacts with light.
      Think of it like what people naturally do when they pick something up and look at it from different angles. Each of those angles is like the pictures and the internal idea you would form of what the 3D object looks like is the model that gets trained. By the time you've analysed an object this way, you also can make pretty good guesses about which parts of the object reflect light, how much light, from which angles and what colour to expect. That's basically the model. The density is how much that point dominates the light you get from that point and could be something like 0 to 1 being from completely invisible to completely opaque.
      Also, if you just look at the pictures you train on, your brain can build this model so that you have a good sense of whether or not a purple spot makes sense on the yellow tractor.

    • @thanvietduc1997
      @thanvietduc1997 2 years ago

      You ask the neural network for density information, not the images. The pixels (RGB values) in those images serve as the target for the neural network to train on.

    • @AThagoras
      @AThagoras 2 years ago

      @@thanvietduc1997 OK. that makes sense. Thanks.
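
    A toy sketch of how each training pixel turns into 3D query points, as discussed in the thread above (my own illustration under the usual NeRF setup, not code from the paper): a ray is cast from the camera through the pixel, points are sampled along it, and those positions (plus the viewing direction) are what the network is asked about; the densities themselves are never supervised directly.

        import numpy as np

        def sample_points_along_ray(origin, direction, near=2.0, far=6.0, n_samples=64):
            """Return n_samples 3D points between the near and far bounds along one camera ray."""
            t = np.linspace(near, far, n_samples)     # depths along the ray
            return origin + t[:, None] * direction    # (n_samples, 3) query positions

        origin = np.array([0.0, 0.0, 0.0])            # camera center
        direction = np.array([0.0, 0.0, 1.0])         # unit ray direction through one pixel
        pts = sample_points_along_ray(origin, direction)
        print(pts.shape)  # (64, 3) - the network is evaluated at each of these points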

  • @ceovizzio
    @ceovizzio 2 years ago +1

    Great explanation, Yannic! I'd like to know if this technique could be used for 3D modelling?

    • @pretzelboi64
      @pretzelboi64 2 years ago

      Yes, you can construct a triangle mesh from NeRF density data

  • @firecloud77
    @firecloud77 2 years ago

    When will this become available for image/video software?

  • @marknadal9622
    @marknadal9622 2 years ago

    Help! How do they determine depth density from a photo? Wouldn't you need prior trained data to know how far away an object is, from a single photo?

    • @YannicKilcher
      @YannicKilcher  2 years ago +2

      Yes, search for monocular depth estimation

    • @marknadal9622
      @marknadal9622 2 years ago

      @@YannicKilcher Thank you!

  • @sebastianreyes8025
    @sebastianreyes8025 2 years ago

    I noticed many of the scenes were from UC Berkeley, kinda trippy. The engineering school there gave me a bit of PTSD ngl.

  • @JeSuisUnKikoolol
    @JeSuisUnKikoolol 3 years ago +2

    I would be very surprised to see a similar technology used to render objects inside games. According to the paper, sampling takes 30 seconds on a high-end GPU. As games often run at 60 fps, this would only be viable with a speed-up of 1800x, and that's assuming we only have to render a single object (so realistically speaking we could add another factor of 100x).
    This does not mean it is not possible with more research and better hardware, but if we compare this to the traditional way of rendering in games, I'm not really sure there is an advantage.
    It's not even something we could not do already, as we have photogrammetry to generate meshes from images.
    For unbiased ("photorealistic") rendering I could see some use, but the learning time is way too high for the moment. One application could be to render a few frames of an animation and use the model to "interpolate" between the frames.

    • @ksy8585
      @ksy8585 2 years ago

      Now it has reached a 1000x speedup in both training and inference. What a pace of progress. There is more chance of using this technology where you take a few pictures of an object in the real world and reconstruct it in 3D by training a neural network. Then you can manipulate it or synthesize 2D images from a novel viewpoint, lighting, time-step (if a video) and so on.

    • @JeSuisUnKikoolol
      @JeSuisUnKikoolol 2 years ago

      @@ksy8585 Very impressive. Do you have a link?

  • @sarvagyagupta1744
    @sarvagyagupta1744 3 years ago +1

    Hey Yannic. I've been waiting for you to talk about this. Thanks. One question though. The viewing angle, is it like the latitude and longitude angles? We need two values because we want to know how that point looks from both the horizontal and vertical angles, right?

    • @quickdudley
      @quickdudley 3 years ago

      I'd been assuming it was pan and tilt. The full algorithm would need to know the roll of the camera but I don't think that would influence any lighting effects.

    • @ashastra123
      @ashastra123 2 years ago

      It's spherical coordinates (minus the radius, for obvious reasons)

    • @sarvagyagupta1744
      @sarvagyagupta1744 2 years ago

      @@ashastra123 but then spherical coordinates have two angles, one w.r.t. the y-axis and the other w.r.t. the x-axis. So are we using the same nomenclature here?

    • @ghostoftsushimaps4150
      @ghostoftsushimaps4150 2 years ago

      Good question. I've also been assuming one angle measures how much left-right and the other how much up-down on the surface of a sphere. So I'm also assuming the viewing angle is like lat/long angles.
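
    A small sketch of the point raised in this thread (my own illustration using a standard spherical-coordinate convention; in practice the direction is often fed to the network as a 3D unit vector): the two angles pin down a unit viewing direction, which is why no radius is needed.

        import numpy as np

        def view_direction(theta, phi):
            """Convert the two viewing angles (radians) to a unit 3D viewing direction."""
            return np.array([
                np.sin(theta) * np.cos(phi),  # x
                np.sin(theta) * np.sin(phi),  # y
                np.cos(theta),                # z
            ])

        d = view_direction(np.pi / 3, np.pi / 4)
        print(np.linalg.norm(d))  # ~1.0 - the direction lies on the unit sphere, so two angles suffice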

  • @ilhamwicaksono5802
    @ilhamwicaksono5802 1 year ago

    THE BEST

  • @mort.
    @mort. 1 year ago

    Is this an in-depth breakdown of what photogrammetry is, or is this different?

  • @jonatan01i
    @jonatan01i 3 years ago

    This is better than magic.

  • @R0m0uT
    @R0m0uT 2 years ago

    This sounds as if the presentation could be done entirely in a raymarching shader on the GPU, as I suspect the evaluation of the model can be implemented as a shader.

  • @herp_derpingson
    @herp_derpingson 3 years ago +1

    Reminds me of that SIREN paper.

  • @dhawals9176
    @dhawals9176 3 years ago

    Ancient, huh! Nice way to put it.

  • @WhenThoughtsConnect
    @WhenThoughtsConnect 3 years ago

    take pics from all angles of apple, maximize the score to label apple

  • @sofia.eris.bauhaus
    @sofia.eris.bauhaus 3 years ago +2

    i don't think this should be called "overfitting". as far as i'm concerned, overfitting means learning the input data (or at least big chunks of it) as one big pattern itself, instead of finding the patterns within the data and generalizing from it. this system may be able to reproduce the input data faithfully (i haven't compared them 🤷) but it clearly learned to generalize the spatial patterns of the scene.

    • @mgostIH
      @mgostIH 3 years ago +1

      It doesn't really generalize anything outside the data it has seen, its job is to just learn *really* well the points we care about, but anything outside that range isn't important.
      Think of it like if you were to train a network on a function f(x) and you are interested on the domain [-1,1]. Overfitting on this domain would mean that the network is extremely precise inside this interval but does something random outside of it, while generalizing means that we also care about having a good estimate of the function outside the domain.
      Here our domain is the parts where we can send rays to, it doesn't really matter what the model thinks is outside the box we never sampled on.

    • @laurenpinschannels
      @laurenpinschannels 3 years ago +2

      in this context, overfitting might be replaced with undercompression

    • @sofia.eris.bauhaus
      @sofia.eris.bauhaus 3 years ago

      @@mgostIH yeah, and a network that is trained on birds will probably never generate a good squirrel. i don't think neural nets tend to be good at producing things unlike anything they have ever seen before.

    • @jeteon
      @jeteon 3 years ago

      🤔 That's actually a good point. If it "overfit" it wouldn't be able to interpolate novel viewpoints, just the pictures it was trained on.

  • @piotr780
    @piotr780 1 year ago

    So are there really two networks (coarse and fine), or is this some kind of trick?

  • @jasonvolk4146
    @jasonvolk4146 3 years ago +4

    Reminds me of that scene from Enemy Of The State th-cam.com/video/3EwZQddc3kY/w-d-xo.html -- made over 20 years ago!

  • @paulcurry8383
    @paulcurry8383 2 years ago

    Why is this “overfitting”? Wouldn’t overfitting in this case be if the network snaps the rays to the nearest data point with that angle and doesn’t interpolate?

  • @VERY_TALL_MAN
    @VERY_TALL_MAN 1 year ago

    It’s NeRF or Nothin’ 😎

  • @brod515
    @brod515 7 months ago

    where does the scene come from?

  • @NeoShameMan
    @NeoShameMan 3 years ago +1

    I'm studying light fields; the premise makes it not that impressive to me. Program a Lytro-like renderer and you'll know what I mean.

  • @GustavBoye-cs9vz
    @GustavBoye-cs9vz 5 months ago

    7:05 - 7:45 So we use the same neural network for multiple different scenes? - That's smart because then we don't need to retrain it every time.

  • @usama57926
    @usama57926 2 years ago

    But can this be used real time?

  • @rezarawassizadeh4601
    @rezarawassizadeh4601 1 year ago

    I think saying that each scene is associated with one single neural network (the NN is overfitted for that scene) is not correct.

  • @dvfh3073
    @dvfh3073 3 years ago +2

    5:50

  • @ankurkumarsrivastava6958
    @ankurkumarsrivastava6958 1 year ago

    Code?

  • @govindnarasimman6819
    @govindnarasimman6819 2 years ago

    Finally something without CNNs. Bravo guys.

  • @fintech1378
    @fintech1378 9 months ago

    Python code?

  • @yunusemrekarpuz668
    @yunusemrekarpuz668 1 year ago

    It's like the end of photogrammetry.

    • @notram249
      @notram249 1 year ago

      NeRF is a step forward in photogrammetry.

  • @pratik245
    @pratik245 2 years ago

    Deep Tesla

  • @Anjum48
    @Anjum48 3 years ago

    Obligatory "we're living in a simulation" comment

  • @muzammilaziz9979
    @muzammilaziz9979 3 years ago

    Yannic "not so lightspeed" Kilcher