Invest in EditorCollection today: wipptemplates.com/product/editorcollection/
Please let me know if you have any questions about this effect!
You're an absolute beast. I've been needing something like this for a while now, I didn't know something like this existed. I just bought your pack, gonna give it a go for this next video, thanks for all your work you put into this!
I did this effect entirely inside Blender 3D, which is much faster on my old PC, then imported it into DaVinci Resolve.
But knowing this lesson is a plus. Thanks, Jake.
This is straight up very impressive!
Great effect, Jake. Thank you.
thank you!!!!
Going to buy these plugins ASAP, cheers bro!
What are the main differences between this and mCamRig from MotionVFX?
Great question! There are a few main ones.
First off is control: Wipp3DCamera has easy on-screen controls, while the MotionVFX tool just uses the Inspector.
Second, mine uses the 3D system, while mCamRig uses the DVE node, which as I pointed out can be a little clumsy.
Finally, my tool has realistic depth of field. Since MotionVFX's tool isn't actually 3D, I believe their DoF is just a mask with a blur node. I could be wrong on the specifics, but it doesn't have correct perspective since it's entirely 2D.
Hope that answers the question!
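For anyone curious about the 2D-vs-3D distinction above: a true 3D camera applies a perspective divide (x' = f·x/z), so points at different depths shift by different amounts when the camera moves (parallax), while a DVE-style 2D transform scales and translates the whole plane uniformly. A minimal NumPy sketch of that idea (my own illustration, not from the video or either tool):

```python
import numpy as np

def project(points, cam_z, f=1.0):
    """Pinhole projection: offset camera along z, then divide by depth."""
    z = points[:, 2] - cam_z          # depth of each point relative to camera
    return f * points[:, :2] / z[:, None]

# Two points that start at the same screen position but different depths.
pts = np.array([[1.0, 0.0, 2.0],
                [2.0, 0.0, 4.0]])

before = project(pts, cam_z=0.0)      # both land at x = 0.5 (they overlap)
after = project(pts, cam_z=1.0)       # dolly forward: near point moves more
```

After the dolly, the near point projects to x = 1.0 while the far point only reaches x ≈ 0.67: that depth-dependent shift is the parallax a flat 2D transform can't reproduce.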
Unfortunately, when I use Wipp3DCamera it keeps causing DaVinci Resolve to crash in a weird way. The timeline becomes unresponsive, and even closing DaVinci and reopening the project doesn't help; it's as if the whole project is permanently corrupted. Have you encountered this issue before? Keep up the great work! :)
@@ryan8992 hey! Sorry to hear about the issues. It's definitely not the normal behavior. Can you reach out via my website for easier troubleshooting?
@@JakeWipp Thanks for the quick response, I'll drop you a message on your website!
Is there a way to make the camera moves last longer than 100 frames? Would be awesome if the start and end points scaled to the bounds of the clip.
Pressing the "Continuous" mode option under the ANIM ENGINE just sets the clip to the End Position and doesn't animate. Any ideas?
@@sharchik916 yes! If you type a number into the box it will allow you to set it higher than 100 frames!
@@JakeWipp Amazing thank you!
This really is amazing, but it feels like it's missing one axis to make it perfect. With the X & Y Pivot and Position controls, I feel like I need one more (maybe a Z pivot?) to actually angle the screen the way I want. Is that possible? It seems I can only tilt forward and back or rotate clockwise and counterclockwise, but I can't spin it, if that makes sense.
2:40 Thanks
Help! The camera doesn't work. I followed all the steps and everything is connected, but the camera in Fusion doesn't follow the image or video. Do I need to activate something else? I have the Studio version.
I found the issue: the Renderer3D1 node wasn't connected to the MediaOut. I had to move it around; it was hidden.
Hey glad to hear you got it figured out!
Thank you! :)
Why not DVE node?
I explain that in the video
👍👍
Talking WAY too fast!
The best way to create depth of field is to create a depth map. You can do that manually in Resolve, but I would suggest using an AI tool.
Yeah, you can definitely get that to work. I just find them really inaccurate when the depth of field is calculated after the fact (like DaVinci's depth tool). Plus, they're an extra step if you use an external tool. If you have an AI tool that works well, let me know; I'd love to look into it!
Why is that the best way? What AI tool?
@@EbolaStew Any time you deal with depth, you want a depth map. InstaMAT is good for creating that. It's more suited to 3D programs like Blender, and that's where I use it, but depth is a 3D thing, so it handles it well.
@@ATLJB86 Help me catch up with your line of thinking. First of all, InstaMAT just arrived on my radar, and it seemed to be pitched as a Substance Designer/Painter alternative; I don't see where it would involve generating a Z-buffer for a 3D scene. I'd like to see where it tackles that. Also, like @JakeWipp said, an AI Z-depth map generated after the fact can be dicey. I'm super impressed with Photoshop's Neural filter, but it is hardly bulletproof in the depth map it produces, and results can vary. Better would be getting the z pass from the 3D scene if you can, which should always be more accurate than an AI's guess at it.
Also, while I love the speed and resulting look of the Photoshop Neural filter compared to doing the same thing in render (with Arnold, for instance), I haven't yet seen it in any plugin that applies to moving footage. It seems to me that if you could combine a real 3D scene-based Z-depth with whatever that Neural AI filter is doing, you would have a winner. Maybe that's what you are saying: AI for the depth-of-field effect, but an actual depth map for the input. Have you seen that for a compositing app? I have, so far, found all of the depth-of-field solutions available within Fusion (or After Effects), even with a good Z-depth map, to be very disappointing in quality. I have not tried Frischluft because it costs money, but I hear it is good.
Anyway, interesting discussion. I personally think we have to appreciate what @JakeWipp has done here. It looks not only much faster than the built-in ways but also better quality.
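To make the depth-map idea in this thread concrete: whether the map comes from a renderer's z pass or an AI estimator, the core of z-depth DoF is that each pixel's blur radius grows with its distance from the focal depth. A naive NumPy sketch (my own simplified linear model and names, and an O(n·r²) box blur rather than a proper lens blur):

```python
import numpy as np

def depth_blur(img, depth, focus, strength=3.0):
    """Per-pixel box blur whose radius scales with |depth - focus|.
    img: (H, W) grayscale image; depth: (H, W) depth map; focus: focal depth."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    radius = np.round(strength * np.abs(depth - focus)).astype(int)
    for y in range(h):
        for x in range(w):
            r = radius[y, x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()   # average over the blur window
    return out

# Toy scene: left half near (at the focal depth), right half far.
img = np.zeros((8, 8)); img[:, 4] = 1.0            # a bright vertical line, in the far half
depth = np.concatenate([np.full((8, 4), 1.0), np.full((8, 4), 5.0)], axis=1)
out = depth_blur(img, depth, focus=1.0)            # near pixels stay sharp, far pixels smear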