WOW! What an improvement! AMAZING!
Thank you for contributing quality work in the area of depth map reconstruction! Quite an improvement over ZoeDepth and MiDaS!
Does it make sense to have specialized models, e.g. people/nature vs. streets/architecture? Kind of organic vs. geometric.
It seems that the universal model struggles with geometric forms and tends to "curve" them toward organic ones.
Another question: is it possible to use your models as a base and train them further on a custom dataset of labeled image/depth-map pairs? (A rough sketch of what I mean is after this comment.)
Oh, and another one: stereo pairs and multi-view inputs. Multiple images carry disparity cues, so the depth can be "calculated" rather than "guessed". Is there any work happening on multi-view depth map reconstruction? This would be very useful in computer vision and self-driving cars.
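On the fine-tuning question: a minimal sketch of what a custom fine-tuning loop could look like in plain PyTorch, assuming the released weights can be restored into a PyTorch model. The names `model` and `loader`, and the simple masked L1 loss, are illustrative assumptions, not the authors' actual training recipe:

```python
import torch
import torch.nn.functional as F

# Assumptions: `model` is a pretrained monocular depth network restored from a
# released checkpoint, and `loader` yields (image, gt_depth) batches from a
# custom labeled dataset. The masked L1 loss is a placeholder for whatever
# depth loss the real fine-tuning setup would use.
def finetune(model, loader, epochs=5, lr=5e-6, device="cuda"):
    model.to(device).train()
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for image, gt_depth in loader:
            image, gt_depth = image.to(device), gt_depth.to(device)
            pred = model(image)              # (B, H, W) predicted depth
            mask = gt_depth > 0              # ignore unlabeled pixels
            loss = F.l1_loss(pred[mask], gt_depth[mask])
            optim.zero_grad()
            loss.backward()
            optim.step()
    return model
```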
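And on the stereo point: with a rectified, calibrated pair, disparity converts directly to metric depth via depth = focal * baseline / disparity. A small sketch, where the calibration values are placeholders:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a stereo disparity map (pixels) to metric depth (meters)
    for a rectified pair: depth = focal_px * baseline_m / disparity."""
    disparity = np.asarray(disparity, dtype=np.float32)
    depth = focal_px * baseline_m / np.maximum(disparity, eps)
    depth[disparity <= 0] = 0.0  # occluded / invalid pixels get no depth
    return depth

# Example: 24 px of disparity with a 720 px focal length and a 12 cm baseline
# gives 720 * 0.12 / 24 = 3.6 m.
```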
What is happening with windows? Will we no longer get 3D when looking through a window?
Can we have a "Depth Anything V2 Metric - Small" version?
We are training smaller metric depth models. We will release them on HF within 5 days. Please stay tuned.
I am thoroughly impressed with your models, by far the best available right now. Just as feedback, I use them for my 3D conversions, and they have a flaw compared to your "competitors." In close-ups, the actors closer to the camera appear blurred, and the depth prediction is incorrect, placing them much further back. If it weren't for this issue, your model would be incredibly good, capturing practically all the small details of an image impressively. One question: When will the GIANT model be available? I can't wait to try it. Thanks.
Hi, could I please know whether you use our basic relative models or the fine-tuned metric models?
@@liheyang8587 Hi, thank you very much for your response. I use the basic models, specifically the LARGE (vitl) model. When the input size is lower, it improves a bit, but it’s still not accurate. Are there other models that have this issue corrected?
@@3dultraenhancer You may try our fine-tuned metric depth models here: huggingface.co/spaces/depth-anything/Depth-Anything-V2/blob/main/metric_depth/README.md, which are more friendly to 3D reconstruction.
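In case it helps others doing 3D reconstruction: once you have a metric depth map, back-projecting it through pinhole intrinsics gives a camera-space point cloud. A minimal numpy sketch, assuming you already have the depth map and your camera's intrinsics (fx, fy, cx, cy are the usual pinhole parameters, not something specific to this repo):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) metric depth map (meters) into camera-space
    3D points with a pinhole model; intrinsics must come from calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth
```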
Looks great in the video but when I use the demo on an image I don't get very good results. Should I expect better results outside of the demo?
Our V2 repository and project page have been recovered: github.com/DepthAnything/Depth-Anything-V2, depth-anything-v2.github.io. Sorry for the inconvenience to you.
Impressive, but the link to GitHub is broken.
Hi, our accounts for the DepthAnything organization and the Depth-Anything-V2 homepage are both suspended. We are still appealing. You may try our demo here: huggingface.co/spaces/depth-anything/Depth-Anything-V2
@@liheyang8587 Thank you for the information! I have already tried the demo, and it is incredible. I would like to know if there is any possibility to access the repository to conduct some additional tests while you solve the issue on GitHub.
Sure. We will try to recover the repo as soon as possible.
Thank you for your interest. Due to the issue with our V2 GitHub repositories, we have temporarily uploaded all the content to our Hugging Face space: huggingface.co/spaces/depth-anything/Depth-Anything-V2/blob/main/README_Github.md.