NERFs (No, not that kind) - Computerphile

  • Published 20 Dec 2023
  • Neural Radiance Field - NeRF - Dr Mike Pound and PhD Student Lewis Stuart demonstrate how a series of photos or a piece of video can lead to a 3D model in your computer.
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharanblog.com
    Thank you to Jane Street for their support of this channel. Learn more: www.janestreet.com

Comments • 133

  • @Computerphile
    @Computerphile  4 months ago +100

    The previous version of this video had some scenes which were insensitive in the light of the tragic events in Prague. We apologize if the video’s content was insensitive given the context of the tragedy. It was never our intention to cause any harm or offense and we were not aware of the tragedy at the time of the video's edit & subsequent release. Those parts of the video have since been removed -Sean

    • @azrobbins01
      @azrobbins01 4 months ago +13

      Do you think you will make the original version public after some time has passed? I left a comment on the other version and it shows as still being there, but the video is private.

    • @jamie_ar
      @jamie_ar 4 months ago +51

      The fact you felt the need to do this highlights a big issue in today's society.

    • @bectionary
      @bectionary 4 months ago +2

      Out of interest, do you remember how long the original video was?

    • @arinc9
      @arinc9 4 months ago +15

      @@jamie_ar How much sensitivity is too much sensitivity? Guess we'll never know.

    • @jub4346
      @jub4346 4 months ago +20

      Who finds nerf fights offensive? All you did was play on words

  • @derrickobara6806
    @derrickobara6806 4 months ago +155

    Is it possible, considering Lewis's specialty, we could call him a nerf herder?

    • @klaxoncow
      @klaxoncow 4 months ago +14

      Though he needs to look a bit more disheveled, I feel, to truly be a scruffy nerd herder.

  • @Guus
    @Guus 4 months ago +52

    Dang I kinda expected you guys to explain more in depth how it actually worked. I feel like Lewis was about to get into the good stuff but was then a bit cut off to just give a basic demonstration instead. Would love to see a longer video starring Lewis where he can take all the time he likes to explain it further, here or on his own channel :)

    • @channel11121
      @channel11121 4 months ago +3

      Would love to see that, as well!

    • @tsunghan_yu
      @tsunghan_yu 28 days ago

      Same!

  • @stephenmurray7495
    @stephenmurray7495 4 months ago +12

    I do love Dr P's enthusiasm. He seems more like the mischievous student himself

  • @MrMdb81
    @MrMdb81 4 months ago +6

    One of the coolest things about NERFs is how they handle reflections within the image, where you can control the viewpoint to actually enter inside the scene within the reflection like it is its own little mini-universe.

  • @TheZaxanator
    @TheZaxanator 4 months ago +43

    Great video, Nerf seems like a really interesting technology. I'd love a follow up about gaussian splatting

  • @AloisMahdal
    @AloisMahdal 4 months ago +4

    The "bad" angles are also kinda awesome, though.
    I could see this being used artistically.

    • @kevinkor2009
      @kevinkor2009 4 months ago +1

      It could represent flying through a dream or a multiverse and finally snapping into focus when you reach a destination.

  • @gabrigamer00skyrim
    @gabrigamer00skyrim 4 months ago +1

    When I saw the thumbnail I was expecting the video to be about Neural Radiance Fields. When they said (no, not that kind) I was then expecting a video of Dr. Mike playing with dart guns.
    Happy to get the former but sad for not having the latter

  • @Locut0s
    @Locut0s 4 months ago +5

    Having played around with ray tracers in the 1990s, povray and the like, as well as some 3d modelling from that era… all of this is just insane to see.

  • @grantpottage
    @grantpottage 4 months ago

    Really enjoyed this video. The discussion was quite interesting, and I appreciated the insights shared. The duo of both Mike and Lewis brought an engaging and insightful presence to the conversation that added to the overall enjoyment.

  • @aelolul
    @aelolul 4 months ago +1

    Good timing. I was just playing with a demo of SMERF which builds on the technique. I'd love a deeper dive on these techniques!

  • @amadzarak7746
    @amadzarak7746 4 months ago

    I’ve been waiting for this one! This is great

  • @diophantine1598
    @diophantine1598 4 months ago +3

    The current state of the art NeRFs are actually much better than this. There’s also Gaussian Splatting which is faster to generate, faster to render, and even higher quality. This field of research is very new and exciting.

  • @Tospaa
    @Tospaa 4 months ago +4

    I see Dr Mike Pound, I click like. That simple.
    Really good content thank you all!

  • @jacejunk
    @jacejunk 4 months ago +3

    Thanks for covering this subject. Could you cover Gaussian Splatting in the future? I think the rendering description would be easier to understand for novices.

  • @TomSnyder--theJaz
    @TomSnyder--theJaz 4 months ago +2

    Well done, Lewis
    Cheers
    (Watch out Mike, Lewis is a very good presenter ;)

  • @realeques
    @realeques 4 months ago

    As a software engineer I'm so glad that I can just harvest such knowledge

  • @hieattcatalyst4540
    @hieattcatalyst4540 4 months ago +2

    This video on Neural Radiance Fields is mind-blowing! Kudos to Lewis for demystifying the complexities with such clarity. Seriously, I'm hooked! Wondering, though, how these fields might revolutionize CGI or virtual environments? Can't wait to dive deeper into this fascinating realm!

  • @SC-fk6bb
    @SC-fk6bb 4 months ago +1

    Best part of the video: Dr Mike watching the student with a suspicious look 😂😂

  • @manfreddellkrantz9954
    @manfreddellkrantz9954 4 months ago +5

    How did you get the camera positions considering he just went around with his phone?

    • @U014B
      @U014B 4 months ago

      I have the same question. Maybe it's from accelerometer readings? Don't know how that would work with an actual camera, though.

    • @sotasearcher
      @sotasearcher 4 months ago

      Usually the program COLMAP is used to get the camera positions

  • @isaacg1
    @isaacg1 4 months ago +2

    Wow perfect timing! Was looking at this earlier. Would love to see that follow up on gaussian splatting

  • @Yupppi
    @Yupppi 4 months ago +1

    Well this is an interesting change of pace, having a PhD student explain the supervisor (and the internet) something new and cool.
    Is this video fast forwarded? Something about their movement and pace doesn't feel natural. Somehow 0.75x feels more natural.

  • @CallousCoder
    @CallousCoder 4 months ago +2

    This is so much better than the rubber dart variant 😊

  • @ezracramer1370
    @ezracramer1370 4 months ago

    Impressive, thank you very much!

  • @Primalmoon
    @Primalmoon 4 months ago +1

    What kind of camera or system did Lewis use in order to get such great position data for each camera image? When I've previously played around with any kind of computer vision from random cameras, it feels like I need to get everything precisely measured, and any inaccuracies mess everything up. Cameras with GPS would still be very vulnerable to slight noise. But the NeRF viewer seemed to have all of the images located smoothly and continuously with no massive outliers.

  • @RRobert99
    @RRobert99 4 months ago +12

    I wonder if google might use tech like this at some point to increase their 3D coverage on maps. If you put together images from street view and satellite images I imagine you could get a decent enough result to show most places in 3D like they already have for bigger cities.

    • @kushagrano1
      @kushagrano1 4 months ago +2

      They already are

    • @alvesvaren
      @alvesvaren 4 months ago +2

      At least Apple Maps already does this. You can move between "frames" in Street View and you can see it try to reconstruct it. It looks really cool

    • @tsunghan_yu
      @tsunghan_yu 28 days ago

      They use it for Immersive View.

  • @esbenablack
    @esbenablack 4 months ago

    Could it complement point clouds for things like building scanning, for use in Building Information Modeling (BIM)?

  • @articgadgets
    @articgadgets 4 months ago

    I am looking forward to a video on Gaussian Splatting!

  • @UnderstandingCode
    @UnderstandingCode 4 months ago

    Yes! Love it

  • @rseichter
    @rseichter 4 months ago +1

    Well, this is slightly more advanced than what we were able to do in the 1990s using the "Stuttgart Neural Network Simulator" (SNNS). 🤓

  • @jaffarbh
    @jaffarbh 4 months ago +1

    One handy trick is to increase the shutter speed (and then de-noise) to minimise blur.

  • @moffwings
    @moffwings 4 months ago

    NeRFs have been able to reproduce accurate specular highlights and relighting of scenes from any angle. I think a more interesting application would be encoding high-detail models, for artistic purposes or as a visualization step in the design process. It's not a 100% accurate display, but it's able to capture a lot of detail in a large scene, which I think is a fair trade-off for a lot of applications. Hopefully we can see tweening between NeRF LODs in the future to fill in the low-detail gaps

  • @surferriness
    @surferriness 4 months ago

    What a trippy 3D scene, imagine the video games you could make.
    Tris count is probably not so nice

  • @hdaalpo
    @hdaalpo 4 months ago

    I thought I recognized this technique! Corridor Crew did a video on this from a VFX perspective. They used what was available a year ago, so it's likely a tad dated. Any plans to try to add real world users as part of the refining process?

  • @bengoodwin2141
    @bengoodwin2141 4 months ago

    Would it ever make sense to use this process to generate data, throw away anything beyond some distance of a target object, then use some other system to generate a 3d model?

  • @robchr
    @robchr 4 months ago +2

    How is this different from photogrammetry?

  • @haydenwolfe7846
    @haydenwolfe7846 4 months ago

    I think I understand. Is it that you are taking a bunch of images, and then doing photogrammetric calculations (except with distance unknown) to find relative distances?

  • @maximecourchesne5986
    @maximecourchesne5986 4 months ago

    Very cool! Not sure what you meant when you said that your camera can collect data points through the tree, though

  • @Ohmriginal722
    @Ohmriginal722 4 months ago +1

    That looks like it's using a much older version of the NeRF algorithm; there are a lot of more recent NeRF papers with more impressive results which run and train much faster, like Instant NGP

  • @arech1778
    @arech1778 3 days ago

    I wonder how it would work with 360 cameras, as you can compensate for the distortion and have a lot more anchors

  • @kipandcop1
    @kipandcop1 4 months ago +2

    Something I've always wondered with NeRFs and similar systems is how you know the point clouds you are using to train from (produced using things like COLMAP) are correct and tuned correctly. When running NeRFs and Gaussian splatting myself, I was very surprised that the "done thing" is to just put your images through COLMAP to produce the point clouds, and the NeRF and Gaussian splat part is more of a rendering technique for said point clouds. Is COLMAP the be-all and end-all of extracting 3D points from RGB images, so it's not something that people care about improving? If it was possible to get point clouds out of 3D rendering software like Blender in the right format for a synthetic scene (which I assume shouldn't be that difficult?), could you basically get a perfect NeRF of that scene?

    • @ThomasPlaysTheGames
      @ThomasPlaysTheGames 4 months ago +2

      For most implementations of NeRF, the use of COLMAP is for the camera-solve to the best of my knowledge, and there are some versions that utilize trained camera positions. Most versions I've dug into don't start with a colmap point cloud.
      That being said, for gaussian splatting you are largely correct, although GS utilizes an extensive amount of optimized densification and as a result depending on the amount of densification occurring you may actually end up with very little of the original point cloud being retained.
      Regarding "Is COLMAP the be all and end all of extracting 3D points from RGB images", there are various methods that produce better results than COLMAP in niche scenarios, but few are as versatile as COLMAP.
      Regarding your last comment, I believe the answer is "mostly yes", in that with a proper dataset of RGB-D images you can probably generate much, much better NeRF results. Granted, the whole point of NeRF is to do a "close 'nuff" job, and if you have perfect data I don't understand why you wouldn't go with a photogrammetry approach, unless your scene depended on the mirror-obscura done by NeRF or the spherical harmonics done by GS.

    • @Jack-gl2xw
      @Jack-gl2xw 4 months ago

      Typically COLMAP is just used to determine the position of the camera in 3d space for each photo. Once each photo is labeled with its position, you can train a NeRF

    • @kipandcop1
      @kipandcop1 4 months ago

      @@ThomasPlaysTheGames thanks for the thoughtful response! As someone who's basically blindly followed guides for Instant NGP and Gaussian splatting, the use of COLMAP has often left me wanting more explanation, after the guides (very fairly) just walk you through the steps to use it with no explanation. An interesting thing with the synthetic 3D scene is that for the full path tracing of Blender Cycles, an individual frame of a scene can take many minutes to render, but of course gives you "perfect" path tracing. Although it's a very niche use case, I can imagine times where having that scene freely explorable in a NeRF or GS would be beneficial. Gaussian splats especially, which can be rendered on CPU in WebGL at a decent frame rate, meaning you could render 200 frames or whatever once, put them through a GS model, and have lots of users explore it in decent enough detail on CPU, to their hearts' content. Again, quite a niche use case, but an interesting one nonetheless

    • @kipandcop1
      @kipandcop1 4 months ago

      @@Jack-gl2xw thanks for the response. Yeah, I have been mainly left wondering about it after blindly following tutorials for Instant NGP and Gaussian splatting, where it is what they walk you through using with no mention of parameters. That's fair enough of course, as they are guides for beginners like myself, but it has left me wanting more explanation of why it's used so readily. Additionally, to a novice like myself, getting the correct 3D points out of photos seems like a difficult thing to optimise across every scene, and when I'm left with a lackluster Gaussian splat or NeRF, I'm often left wondering if it was the model training itself or the COLMAP step that "went wrong"/didn't optimise correctly

    • @Jack-gl2xw
      @Jack-gl2xw 4 months ago +2

      @@kipandcop1 You can manually verify the COLMAP outputs in nerfstudio like in the video (as in, visually check and make sure they look like they are lined up correctly). If COLMAP messed up badly, it should be obvious in your outputted NeRF, because the incorrectly positioned image or images will be floating and won't mesh with the scene. As for general quality, I'm not sure without seeing your data/results. Try better lighting and more photos. I've had some great results with NGP and Gaussian splats with just my phone and running COLMAP on the images
      Edit: another thing, make sure your images overlap with each other. This is how COLMAP works. If none of your images cover the same area, COLMAP won't be able to figure out the images' positions relative to each other. I recommend recording a video then running COLMAP in sequential mode
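For readers who want to poke at the COLMAP outputs discussed in this thread: COLMAP's text export stores, for each image, a world-to-camera rotation as a quaternion (QW, QX, QY, QZ) plus a translation vector, and viewers like nerfstudio invert these to place the cameras in the scene. A minimal sketch of that conversion (function names are illustrative, not part of COLMAP's API):

```python
import numpy as np

def qvec2rotmat(q):
    """World-to-camera rotation from a COLMAP-style quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*w*z,     2*x*z + 2*w*y],
        [2*x*y + 2*w*z,     1 - 2*x*x - 2*z*z, 2*y*z - 2*w*x],
        [2*x*z - 2*w*y,     2*y*z + 2*w*x,     1 - 2*x*x - 2*y*y],
    ])

def camera_center(qvec, tvec):
    """Recover the camera position in world coordinates: C = -R^T t."""
    R = qvec2rotmat(qvec)
    return -R.T @ np.asarray(tvec, dtype=float)

# identity rotation: the camera sits at -t
print(camera_center((1.0, 0.0, 0.0, 0.0), (1.0, 2.0, 3.0)))  # [-1. -2. -3.]
```

Plotting the recovered centers (are they roughly where you walked?) is one quick sanity check on the COLMAP step before blaming the NeRF training itself.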

  • @reynoldscem
    @reynoldscem 4 months ago +1

    NeRF should be stylised in the title with a lowercase 'e'.

  • @AgentM124
    @AgentM124 4 months ago

    Just imagine letting a human look around a scene for only a few minutes and have them 'visualize' it in their head. That would probably "look" somewhat like that shown here.

  • @Iswimandrun
    @Iswimandrun 4 months ago

    So you train it, presumably on a GPU. Can you deploy the NeRF on a TPU such as a Coral or Intel Compute Stick?

  • @duytdl
    @duytdl 4 months ago

    How does the neural network (alone) figure out the distances etc?

  • @yppahpeek
    @yppahpeek 4 months ago +2

    Is this how Google constructs 3D images for Google Earth? I've been wondering that for ages

    • @consolek1d
      @consolek1d 4 months ago

      You mean Terravision?

    • @quillaja
      @quillaja months ago +1

      I'd imagine most of Google Earth's 3D is from lidar datasets.

    • @tsunghan_yu
      @tsunghan_yu 28 days ago +1

      NeRF came out in 2020. So Google Earth used something else. Google does use NeRF for Immersive View in Maps.

  • @aryaamootaghi3248
    @aryaamootaghi3248 4 months ago

    Based on what was explained, it looked to me more like tomographic ray tracing in the end.
    Also, I think it might work with only two camera positions, with the rest estimated as part of the unknowns in the system of equations 🤔?!

  • @GamingShiiep
    @GamingShiiep months ago

    I appreciate the video, but, maybe it's just me, I still struggle to understand the "point analogy" shown at around 5:42 (and previously). "Checking at which distance the observation hits the object" (or similar) implies that you'd be using images with depth information already present, making use of something like focal distance and "out of focus" testing. But it obviously isn't, so how does it work then?
    I'm currently starting to read a bit about it for my master's, so I know that there's A LOT of math behind it. However, the concepts are hardly ever explained visually in a way that makes actual sense. Maybe in a few weeks I'll come back and understand what you're actually trying to explain.

  • @vermeul1
    @vermeul1 4 months ago

    Great new presenter!

  • @dave8is8beast
    @dave8is8beast 4 months ago

    I'm curious if 360-degree images would help in providing a better set of images to train off of

    • @andybrice2711
      @andybrice2711 4 months ago

      I'd guess probably not, because more pixels of your available resolution will be used up with repetitive images of the environment around your scene, rather than high-resolution data of the objects within it.

  • @aame6643
    @aame6643 4 months ago

    I'd love a video on Gaussian Splatting; it's supposed to be better than NeRFs?

  • @YandiBanyu
    @YandiBanyu 4 months ago +5

    What is the advantage of this vs traditional photogrammetry?

    • @andybrice2711
      @andybrice2711 4 months ago +1

      It's vastly more photoreal by being somewhat "impressionistic" about the details. Converting a scene into a textured mesh is often a complex process for a sub-optimal format. Like trying to recreate a realistic painting in Lego bricks.

    • @YandiBanyu
      @YandiBanyu 4 months ago

      ​@@andybrice2711 I see, thanks for the explanation

  • @stevenmathews7621
    @stevenmathews7621 4 months ago +1

    Love Dr Pound
    very sweet man
    i'm laughing hysterically at the notion that
    off camera he's like a dictatorial a*hole
    camera back on, back into sweet guy mode
    lol, childish, love it 🤣

  • @ZT1ST
    @ZT1ST 4 months ago

    @4:57; The way it's described here, it sounds like "Tracert for Ray Tracing".

  • @georgedyson9754
    @georgedyson9754 4 months ago

    Seems a bit like an X-ray CT scanner or an MRI tied to a neural network

  • @tsunghan_yu
    @tsunghan_yu 28 days ago

    This is a bit too high level for the channel. But I appreciate the demo!

  • @mlguy8376
    @mlguy8376 4 months ago +1

    Are Lewis and Mike related? They talk the same, with the same mannerisms. I don't think I started to talk like my own supervisor 😂

  • @iamavataraang
    @iamavataraang 4 months ago

    Is this what the PolyCam app uses?

  • @MagruderSpoots
    @MagruderSpoots 4 months ago

    What hardware is this running on?

  • @djstr0b3
    @djstr0b3 4 months ago +5

    Dr Mike, you should start producing an online course for the AI subjects that you have discussed.

  • @YuTv1408
    @YuTv1408 4 months ago

    Diffusion in Materials science and physics is very similar to Cs diffusion

  • @olivermorris4209
    @olivermorris4209 4 months ago +1

    No computer desk is complete without a sandwich toaster

    • @michaelwilson5742
      @michaelwilson5742 4 months ago

      Yup, he can look forward to a conversation about that in the new year 😀

  • @evarlast
    @evarlast 4 months ago

    He starts drawing on tractor-feed green bar paper? Does that stuff even exist anymore?

  • @Roxor128
    @Roxor128 2 months ago

    I wonder how well this would work if you fed it a series of 360-degree views from Google Street View going along a road?

    • @monster284
      @monster284 months ago

      Google has hinted that they're working on something like this. Their latest NeRF model (SMERF) can scale to arbitrarily large scenes. Would be pretty cool to navigate Street View in a high-res 3D model...

    • @Roxor128
      @Roxor128 months ago

      @@monster284 If it works, it'd be really useful for game developers that want to set their games in a real place. Grab the street view imagery for the relevant area and generate a first-draft model to build upon.

  • @becomingdave
    @becomingdave 4 months ago

    I'm from South Africa and I'm making a living from what I have learnt here

  • @Bluedragon2513
    @Bluedragon2513 4 months ago

    This could be one of the quicker ways to create 3D models

  • @MikePaixao
    @MikePaixao 4 months ago

    Now if you reverse engineer the process and generate a field of color data based on camera angle and some higher-dimensional maths, you have forward predictive rendering 😀

  • @biomatrix8154
    @biomatrix8154 4 months ago

    To reconstruct bloody crime scenes, I guess they'd use Gaussian splattering.

  • @julianmeredith9168
    @julianmeredith9168 4 months ago

    Just gonna throw this out there. I have a few reasons why, but Lewis seems very… AI?!

  • @SirKenchalot
    @SirKenchalot 4 months ago +1

    9:41 Great attempt at controlling your mouth there bro; remember, it's a family show.

    • @tjsm4455
      @tjsm4455 27 days ago

      haha nice catch

  • @Yitzh6k
    @Yitzh6k 4 months ago

    Are people that are very capable using this technique called "NERF Guns"?

  • @user-gb3rd6wk7z
    @user-gb3rd6wk7z 4 months ago

    The guy on the right looks like Hugh Grant.

  • @borregoayudando1481
    @borregoayudando1481 4 months ago

    yeah, it's NERF or nuthin'

  • @kbrizy7490
    @kbrizy7490 3 months ago

    Doesn’t look bad at all. Looks like good maps in 3d

  • @rudiklein
    @rudiklein 4 months ago

    The new SME was great, not nerfous at all.

  • @AdrinaRoM
    @AdrinaRoM 4 months ago +1

    It's basically what Megascans does for gaming. They scan, for example, a rock in high detail, and you can import that rock into your game engine.

  • @YuTv1408
    @YuTv1408 4 months ago

    NeRFs sounds like Nerds...

  • @davidberger5745
    @davidberger5745 4 months ago

    Please publish videos on fast-moving fields quicker; it's completely outdated now.

  • @mertakyaz5359
    @mertakyaz5359 4 months ago

    Hello Computerphile, I love your content. Can someone please explain in a video what a DAG is and how it can disrupt blockchain tech? It's from graph theory

  • @zwanz0r
    @zwanz0r 4 months ago +1

    You guys look quite nerfous. Very nerfwracking. Too bad the tree got nerfed.

  • @ankurgajurel02
    @ankurgajurel02 4 months ago

    FIRST

    • @Jake28
      @Jake28 4 months ago

      WAIT WHAT?? I thought it said 11 months lmao

  • @maxrs07
    @maxrs07 4 months ago +3

    Why is this called ML? At this rate anything a computer does should be called ML

    • @ParadiZE3D
      @ParadiZE3D 4 months ago +5

      Because of the underlying optimization algorithms

    • @Jack-gl2xw
      @Jack-gl2xw 4 months ago +1

      A neural network is learning to represent the scene. Machine learning is broadly described as learning patterns and predictions from data (i.e. training). Here, the data is the RGB photos, and it is training a NeRF to represent the scene

    • @maxrs07
      @maxrs07 4 months ago

      @@Jack-gl2xw It's not learning anything; it's doing what the algorithm told it to do and storing it in memory, just like anything else you do on a computer. Things should only be called ML if they then do the above backwards, but AFAIK NeRF has no backward algorithm.

    • @sotasearcher
      @sotasearcher 4 months ago

      ​@@maxrs07 yes, the original NeRF uses a MLP/multilayer perceptron, which is just a fully connected neural network, and uses a variant of backpropagation. BTW an algorithm doesn't need a backward pass to be ML, it just needs to learn a function, just look at decision trees

    • @Jack-gl2xw
      @Jack-gl2xw 4 months ago +1

      @@maxrs07 NeRFs still use the backpropagation algorithm. The rendering method of NeRFs is differentiable, thus trainable. I get what you're saying about how it seems like ML is in everything these days, but this is literally a neural network.
      For extra info, the NeRF model takes a 5D input (an (x, y, z) position plus a 2D viewing direction) and outputs a colour (r, g, b) and a volume density. From the direct inputs/outputs of the model you can see how it is ML and how the model is being trained. Check out the full NeRF paper if you are interested
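To make the thread's point concrete: along each camera ray, the predicted densities and colours are alpha-composited into a single pixel colour, and because every step is differentiable, the photo-versus-render error can backpropagate into the network. A toy NumPy sketch of that compositing step (the arrays stand in for MLP outputs at samples along one ray; names are illustrative):

```python
import numpy as np

def composite(sigmas, rgbs, deltas):
    """Alpha-composite samples along one ray (NeRF's volume-rendering quadrature)."""
    alphas = 1.0 - np.exp(-sigmas * deltas)            # per-segment opacity
    # transmittance: how much light survives to reach each sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas                           # contribution of each sample
    color = (weights[:, None] * rgbs).sum(axis=0)      # expected colour of the ray
    return color, weights

# toy ray with three samples; the dense middle sample should dominate
sigmas = np.array([0.0, 5.0, 0.1])                     # predicted densities
rgbs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
deltas = np.array([0.5, 0.5, 0.5])                     # spacing between samples
color, weights = composite(sigmas, rgbs, deltas)
print(color)  # mostly green: the nearly opaque middle sample dominates
```

In training, `color` would be compared against the pixel from the real photo, and the loss gradient flows back through these weights into the MLP's density and colour predictions.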

  • @Lion_McLionhead
    @Lion_McLionhead 4 months ago

    Still looks like a turd but cheaper than lidar scanning.

  • @infectedrainbow
    @infectedrainbow 4 months ago

    impressive ways of generating new views of...what? stupid auto CC :(

  • @infectedrainbow
    @infectedrainbow 4 months ago

    I can't find a definition for wasterize.

  • @infectedrainbow
    @infectedrainbow 4 months ago

    You should have had the younger guy speak all of the lines. He's understandable.