Image to Mesh using ComfyUI + Texture Projector

  • Published 16 Jun 2024
  • In today's video, we will talk about the image-to-mesh workflow, including 3D reconstruction from a single image or from multiple images.
    00:00 Introduction
    00:46 ComfyUI Layer Diffuse
    01:52 3D reconstruction solutions
    04:54 CRM introduction
    05:52 CRM diagram
    07:19 ComfyUI 3D Pack
    07:41 CRM Image to Mesh workflow in ComfyUI
    12:26 Wonder 3D Image to Mesh workflow in ComfyUI
    13:24 Import CRM mesh in 3D Max
    14:42 Mesh comparison of CRM, TripoSR, Wonder3D+NeuS
    15:38 Mesh optimization
    16:56 Comparison of retopology and optimization process
    22:02 ZBrush ZRemesher
    24:13 UV
    24:52 Create outline texture for mesh using Texture Projector in UE
    27:22 Texture refinement
    29:33 Project textures to mesh using Texture Projector in UE
    30:59 Bake texture using Property Baker in UE
    32:06 Single image to mesh final results
    32:48 Why the reference image must be close to the frontal view
    34:43 Use depth Control Net to control the view angle
    37:56 Multi-view images to mesh workflow in ComfyUI
    42:18 Gaussian Splatting + DMTet
    43:02 Restriction of Multi-view images to mesh workflow in ComfyUI
    44:04 Summary
    Music: Sunny Skies (by Suno)
    Create various textures using Texture Projector and Stable Diffusion
    • Create various texture...
    -----------------------------------
    MARS Texture Projector:
    www.unrealengine.com/marketpl...
    MARS Property Baker:
    www.unrealengine.com/marketpl...
    MARS Master Material:
    www.unrealengine.com/marketpl...
    -----------------------------------
    Houdini Lego Mesh
    • Legoize geometry & RBD...
    • brickini - procedural ...
    • Procedural Lego Bricks...
    Gaussian Splatting
    • 3D Gaussian Splatting ...
    • Photogrammetry / NeRF ...
    • What is 3D Gaussian Sp...
    • Step-by-Step Unreal En...
    • Gaussian Splatting exp...
    -----------------------------------
    #imagetomesh #3dreconstruction #sv3d #triposr #unreal #textureprojector #stablediffusion #comfyui
  • Science & Technology

Comments • 74

  • @michaelmurrillus915
    @michaelmurrillus915 1 month ago +2

    methodical presentation. Very well done

  • @soma78
    @soma78 2 months ago +2

    Impressive, the amount of work you put into this video... well done. Subscribed.

  • @vivigomez5960
    @vivigomez5960 1 month ago

    Beautiful!! Great video. A lot of work and time in this great explanation.

  • @EqualToBen
    @EqualToBen 2 months ago +4

    Awesome topology comparison! This video is gold

  • @Meteotrance
    @Meteotrance 24 days ago

    They could use metaballs instead of voxels for the point-cloud mesh reconstruction; they're super light and fast for generated volumes, and Blender handles metaball-to-polygon conversion very well...

  • @artmosphereID
    @artmosphereID 2 months ago +2

    Good for hard-surface/static/prop assets. For organic and animated models it's a big no-no; it will be a nightmare for the animator.

  • @brianmcquain3384
    @brianmcquain3384 2 months ago

    cool song, totally unexpected out of the blue!

  • @MaxSMoke777
    @MaxSMoke777 2 months ago +4

    You could do all of those dozens of steps... OR... just use the front and side images for reference and simply build the model like you would any other. You've put so much work into saving time that you've definitely made it harder.

    • @kefuchai5995
      @kefuchai5995 2 months ago

      You are right. AI has led me into a lot of confusing behavior and has overcomplicated simple problems.

    • @IS0JLantis
      @IS0JLantis 2 months ago +6

      No, it's like spending 5 hours writing a script to automate a task that only takes 30 minutes to do. It is not intended for single use. Once you find a reliable workflow leveraging AI, productivity will skyrocket; old modelling techniques will simply not be able to keep up.
      We need tests like these to learn from.

  • @PixelPoetryxIA
    @PixelPoetryxIA 6 days ago

    Hi, where can I find the JK_workflow?
    Thanks in advance

    • @kefuchai5995
      @kefuchai5995 5 days ago +1

      I'm preparing to upload the custom nodes and workflows to GitHub and Civitai.

  • @teambellavsteamalice
    @teambellavsteamalice 2 months ago +1

    I have a feeling this has way more potential.
    Is there any way to deconstruct the image into parts, then compare these parts to a set of variants, pick the closest and construct a composition of these? Like a reference model to help the process?
    I imagine you'd need a few basic head shapes, ears, chins, eyebrows and perhaps even hairdos. Then have sets of images (angles or a lora model?) for archetype heads (or complete bodies).
    Like a bland base model, one with extreme elvish ears, one with a pronounced chin, one with exaggerated brows, etc. Then you'd need to reconstruct the mix you want from these archetype sets to get the right mix. I'm not sure you can use interpolation that easily (IIRC ControlNet had options?), but if you use the same process on each image of these consistent sets, the resulting set should be consistent too, right?
    Then if each archetype has a nicely fixed 3d model, you could also generate one for the mixed composition.
    Would such a process to create an approximation or base model be doable?
    Could you use this and the actual image (in iterations?) to create a consistent 3D model without any manual fixes?

    • @kefuchai5995
      @kefuchai5995 2 months ago +1

      Great idea! That will be the next generation of AI 3D Mesh. SD AI should learn this.

  • @burakgurses9287
    @burakgurses9287 5 days ago

    Can we use our own multi-view images instead of creating multi-view images with AI?

    • @kefuchai5995
      @kefuchai5995 5 days ago +1

      Yes, we can. Multi-view creation and mesh generation are separate processes in image-to-mesh, so it is possible to use your own drawn multi-views to generate models. Additionally, the multi-view creation results can be used as a reference for your drawing.

    • @burakgurses9287
      @burakgurses9287 5 days ago

      @@kefuchai5995 Thank you! I will share my results here.

  • @Rahviel80
    @Rahviel80 2 months ago +4

    Baked lighting and no PBR textures are a showstopper for game dev; the results also have that AI look and an unoptimised mesh. That's far from useful.

    • @kefuchai5995
      @kefuchai5995 2 months ago +1

      Some checkpoints can generate a diffuse (albedo) texture using a light-environment prompt like "soft ambient light", then generate PBR textures based on the diffuse texture.

  • @MrGATOR1980
    @MrGATOR1980 2 months ago +3

    TBH I would sculpt and paint this myself faster than all those shenanigans.

  • @RoN43wwq
    @RoN43wwq 2 months ago +2

    nice. Thanks

  • @Zamundani
    @Zamundani 2 months ago +7

    Basically it's an over-glorified base mesh.

  • @cj5787
    @cj5787 2 months ago +3

    "Looks cool and effective" to the untrained eye... in reality this is like 20 times more complicated and time-consuming than a regular 3D workflow, getting a result that is not even usable...

    • @kefuchai5995
      @kefuchai5995 2 months ago +1

      It's because we're used to the old workflow.

  • @catparadise950
    @catparadise950 2 months ago

    Could you share how to install ComfyUI's 3D Pack? I've tried the custom nodes for other models in it, but I'm just not sure how to install this integrated pack. I'm using a Python virtual environment.

    • @kefuchai5995
      @kefuchai5995 2 months ago

      You can take a look at this first, the notes I listed earlier:
      www.bilibili.com/read/cv33521683

  • @linnkoln11
    @linnkoln11 1 month ago

    Hey! About the img2img support for Layer Diffusion: you need to make the background 50% grey. For me it did the job! By the way, I'm not done with the video yet, but what I've seen so far is awesome!

    • @kefuchai5995
      @kefuchai5995 1 month ago

      Wow! Great. Thanks for sharing.
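
The 50% grey trick described in the comment above can be sketched as a small preprocessing step. This is a hypothetical helper, not part of Layer Diffusion itself, and it assumes Pillow is installed:

```python
from PIL import Image

def on_grey_background(img: Image.Image) -> Image.Image:
    """Composite an RGBA image onto a 50% grey (128, 128, 128)
    background, as suggested for Layer Diffusion img2img input."""
    rgba = img.convert("RGBA")
    grey = Image.new("RGBA", rgba.size, (128, 128, 128, 255))
    return Image.alpha_composite(grey, rgba).convert("RGB")

# A fully transparent pixel ends up as pure 50% grey:
sample = Image.new("RGBA", (8, 8), (255, 0, 0, 0))
print(on_grey_background(sample).getpixel((0, 0)))  # (128, 128, 128)
```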

  • @masterkarlzon
    @masterkarlzon 2 months ago

    Really cool!

  • @mikerhinos
    @mikerhinos 2 months ago

    Personally I'm getting this error when trying to run the CRM to multiview to CCM example workflow (it happens at the mesh-construction node, which is quite frustrating because the different images look good):
    "RuntimeError: Error building extension 'nvdiffrast_plugin': ninja: error: build.ninja:3: lexing error
    nvcc = D:\pinokio\bin\miniconda\bin\nvcc.exe"
    I guess it may be a path problem, but I can't find how to resolve it yet :(

    • @kefuchai5995
      @kefuchai5995 2 months ago

      It is an installation problem with VS or CUDA. Maybe it is the CUDA path?
      github.com/MrForExample/ComfyUI-3D-Pack?tab=readme-ov-file#install
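
A minimal sketch of the CUDA-path fix hinted at in the reply above, assuming a Linux install with the toolkit at /usr/local/cuda (an example path, not the asker's actual setup; on Windows, set CUDA_PATH in the system environment variables instead):

```shell
# The nvdiffrast plugin is compiled at runtime by ninja, which needs a
# real CUDA toolkit nvcc -- the error above shows it resolving into a
# Miniconda bin directory instead. Point the environment at the toolkit
# before launching ComfyUI (example path, adjust to your install):
export CUDA_HOME=/usr/local/cuda
export PATH="$CUDA_HOME/bin:$PATH"
# Sanity check: report where nvcc now resolves.
command -v nvcc || echo "nvcc not found - check the CUDA_HOME path"
```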

  • @samwalker4442
    @samwalker4442 2 months ago

    THANK YOU!

  • @USBEN.
    @USBEN. 2 months ago +1

    Now just have to automate all this.

  • @Keji839
    @Keji839 2 months ago

    This would be good for hard-surface objects. Organic is a no-go.

  • @shiccup
    @shiccup 2 months ago

    sick

  • @arberstudio
    @arberstudio 2 months ago +5

    or you can learn to 3D model lol

    • @MaxSMoke777
      @MaxSMoke777 2 months ago

      Yah!

    • @kefuchai5995
      @kefuchai5995 2 months ago

      yeah

    • @jonmichaelgalindo
      @jonmichaelgalindo 2 months ago

      "And then adjusted" AKA modeling cuz AI can't 3d.

    • @kefuchai5995
      @kefuchai5995 2 months ago

      @@jonmichaelgalindo 😂

    • @generichuman_
      @generichuman_ 2 months ago +4

      The last words of someone about to lose their job to new tech. Adapt or get left behind...

  • @piotrek7633
    @piotrek7633 2 months ago +2

    Even if AI makes you Witcher 3-quality models from thin air in the future, there's no sense of fulfillment from that. AI is taking our jobs, and our ways to have fun while it's at it. If AI makes making games ridiculously easy, then game dev will be even more competitive than it already is today.

    • @kefuchai5995
      @kefuchai5995 2 months ago

      I would rather think of AI as an assistant.

    • @piotrek7633
      @piotrek7633 2 months ago

      @@kefuchai5995 Yeah, but for how long though? Artists are already taking a hit because Midjourney is literally better quality than most of them, and it can pick up the styles of top artists. AI-generated ads are already popping up, so that's less work for employees. So when will it hit 3D modelling and game dev as a whole is my question, since it's clearly going in that direction looking at Meshy, and Altman tweeted 2 days ago that "movies are going to become video games and video games are going to become something unimaginably better". Doesn't this mean trouble for game devs? If we can't do something "unimaginably better" now in big teams like CD Projekt Red or Rockstar, what does he mean? AI generation, of course. If he's not yapping and it's true that his team is cooking something hot for games, then oh lord have mercy, I will have zero fun in life if they take game dev. Although AI-generated movies that let you interact won't affect the video-game market, since it will be like traditional art and digital art: people will probably want to consume both.

    • @kefuchai5995
      @kefuchai5995 2 months ago +1

      @@piotrek7633 For me, I won't think about it for now. I will keep following and looking forward to the day when everyone can make games. I hope what Altman says is true, not just imagined but experienced.

    • @vivigomez5960
      @vivigomez5960 1 month ago

      Your comment seems more typical of a person from the 15th century standing in front of the printing press.

  • @user-li7ce3fc3z
    @user-li7ce3fc3z 1 month ago

    A ton of effort and money spent on software, and the output is complete garbage. It's far simpler to create everything from scratch and use the AI images as reference.

  • @scrutch666
    @scrutch666 2 months ago +12

    Any mediocre artist would have completed that task in half the time, and successfully. This is not even a mesh you could use in game production for a human character. It would cost more time to fix it than doing it yourself by hand. Nobody would hire you with such topology.

    • @kefuchai5995
      @kefuchai5995 2 months ago +1

      Yes, that's why it still needs retopo and remeshing.

    • @AIJOBSFORTHEFUTURE
      @AIJOBSFORTHEFUTURE 2 months ago +3

      @scrutch666 Do not fear the death of an industry, Celebrate the birth of a new reality….or cope

    • @2slick4u.
      @2slick4u. 2 months ago +4

      That's where UE 5.3 comes in clutch with its new Nanite skeletal mesh.
      It's the future and it's gonna hit you hard in the toolbox.

    • @TheSleepfight
      @TheSleepfight 29 days ago

      @@2slick4u. scrutch is correct. I'm a Principal Character and Hard Surface artist with 18 years in the industry and over 15 shipped titles. What do you think Nanite will change?

    • @2slick4u.
      @2slick4u. 29 days ago

      @@TheSleepfight With bandwidth and internal storage constantly increasing, it's becoming realistic to dump high-poly and unoptimized models. UE5 almost completely removes the relevance of retopo.

  • @Mr3Dmutt
    @Mr3Dmutt 1 month ago +2

    Alright... I watched the whole video and can confidently say this is useless at any level of production, indie or big-budget. Also, if you're going to invest that much time into something, why not have fun and sculpt and paint it?

    • @kefuchai5995
      @kefuchai5995 1 month ago

      That's right, I only use this when sculpting becomes boring sometimes.

    • @DimensionDoorTeam
      @DimensionDoorTeam 1 month ago

      "I only use this when Sculpting becomes boring sometimes"🤞Technically, that's the best part of creating a character.

  • @AX-032
    @AX-032 1 month ago

    Can you share your JSON file, please?

    • @kefuchai5995
      @kefuchai5995 1 month ago

      The ComfyUI workflow? It is from 3D Pack with some customization. github.com/MrForExample/ComfyUI-3D-Pack/tree/main/_Example_Workflows

  • @GradeMADE
    @GradeMADE 2 months ago

    Hey bro, do you have the workflow you created for us to use?

    • @kefuchai5995
      @kefuchai5995 2 months ago +1

      The workflow I used is copied from the example of ComfyUI 3D Pack.
      github.com/MrForExample/ComfyUI-3D-Pack/tree/main/_Example_Workflows

    • @samsilva7209
      @samsilva7209 2 months ago

      @@kefuchai5995 When I open the "Multi-View-Images_to_Instant-NGP_to_3DMesh" workflow, for example, even if I install the missing nodes in the Manager panel, there are still many nodes with this message: "When loading the graph, the following node types were not found:
      [Comfy3D] Preview 3DMesh 🔗
      [Comfy3D] Gaussian Splatting Orbit Renderer 🔗
      [Comfy3D] Stack Orbit Camera Poses 🔗
      [Comfy3D] Switch 3DGS Axis 🔗
      [Comfy3D] Load 3DGS 🔗
      [Comfy3D] Save 3D Mesh 🔗
      [Comfy3D] Instant NGP 🔗
      [Comfy3D] Fitting Mesh With Multiview Images 🔗
      Nodes that have failed to load will show as red on the graph.
      Do you have any idea what I might be doing wrong, or not doing?
      Thank you in advance

    • @GradeMADE
      @GradeMADE 2 months ago

      @@kefuchai5995 Ty Bruv

    • @user-bl8lb7yy1l
      @user-bl8lb7yy1l 1 month ago

      @@kefuchai5995 Hey bro, the example workflow of CRM does not include the upscale-img part. I tried to load an upscale model, but it seems to have some errors in Python: "Input type (struct c10::Half) and bias type (float) should be the same". Could you help with this? Deep thanks.

    • @kefuchai5995
      @kefuchai5995 1 month ago

      @@user-bl8lb7yy1l th-cam.com/video/Y6-JGi_ksos/w-d-xo.htmlsi=1qBaFHsGtPETxU87&t=611
      Check out the video from this timestamp. The format of CRM-generated images requires manual conversion before they can be used for upscaling.
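
The "Half vs float" error quoted above is a precision mismatch between the upscaler's float weights and a half-precision image. The actual ComfyUI fix is the format conversion shown in the linked video; the dtype side of it can be illustrated with a minimal sketch, using NumPy arrays standing in for the tensors (`to_float32` is a hypothetical helper, not a real node):

```python
import numpy as np

def to_float32(img: np.ndarray) -> np.ndarray:
    """Cast a half-precision image array to float32 so it matches
    model weights stored as float (a stand-in for the cast a
    conversion step would perform on the real tensors)."""
    return img.astype(np.float32)

crm_img = np.zeros((64, 64, 3), dtype=np.float16)  # CRM-style output
print(to_float32(crm_img).dtype)  # float32
```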