Hi, great work! From the video on the project page, I notice that the background keeps jittering (moving slightly) even when the view only changes a little. Does this problem also exist for Nerfies? How do you check that it doesn't overfit? From the paper I gather the loss is only the RGB rendering loss; since I don't see any regularization on the two fields, I'm afraid they might do whatever they want. Take a completely static scene, for example: how do you prevent the model from learning a topological change and then deforming it back?
I bet this would be fixed by using multiple real cameras. I think what's happening is that the space you are seeing moves because it's a remembered location, not one that is visible at that time.
@benbionic Hi, yes, multiple cameras would definitely help, but I think that setup is too strong a requirement and not what this paper aims at. In my opinion, just adding a few regularization losses could largely alleviate this problem; I just wonder why the authors don't mention them in the paper (or don't add them at all). As for "is moving because it's a remembered location not a visible one": I don't think that's what's happening. Look at the coffee example (a cup pouring coffee into another cup): the background moves A LOT, and there is no reason that space is ever invisible at ANY time.
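For what it's worth, here is the kind of regularizer I had in mind, as a toy NumPy sketch. To be clear, this is my own illustration, not anything from the paper: the function name and the finite-difference formulation are made up. The idea is just to penalize spatial variation of the predicted deformation offsets, so that nominally static regions like the background can't drift freely between frames.

```python
import numpy as np

def deformation_smoothness_loss(offsets):
    """Penalize spatial variation in a sampled deformation field.

    offsets: (N, 3) array of predicted offsets at N neighboring
    sample points. A rigid or static region should have near-constant
    offsets, so we penalize squared differences between neighbors.
    """
    diffs = offsets[1:] - offsets[:-1]              # finite differences
    return float(np.mean(np.sum(diffs ** 2, axis=-1)))

# A constant offset field (pure translation) incurs zero penalty,
# while a spatially varying one is penalized.
static = np.tile(np.array([0.1, 0.0, 0.0]), (8, 1))
wobbly = np.random.default_rng(0).normal(size=(8, 3))
assert deformation_smoothness_loss(static) == 0.0
assert deformation_smoothness_loss(wobbly) > 0.0
```

Something like this, weighted and added to the RGB loss, would at least discourage the "topological change, then deform it back" failure mode in static regions, though a proper elastic regularization on the deformation Jacobian would be more principled.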
Honestly the coolest thing I've seen on this here internet all year! Incredible!
Incredible!
Incredible!!
Love the project. I wish more interesting data were used to show the results instead of people making weird faces
Please release the code with guide :)
Can we use your software to render 360 video from our custom photogrammetry drone?
Code please, along with a good guide on how to use it on a personal PC with a GPU and Python installed :)
I'd love to mess around with this and see how much deformation I can get away with. Please keep me posted when the code is available!