Matt Hallett Visual
How I Quickly Created This Stunning Image in 3ds Max Using TyDiffusion
Discover how to quickly create stunning images using TyDiffusion inside 3ds Max. In this video, I’ll walk you through a real example, showing just how simple your scene can be while still achieving beautiful, accurate results in minutes.
# Links from the Video #
docs.tyflow.com/tyflow_AI/tyDiffusion/
github.com/LykosAI/StabilityMatrix
civitai.com/models/866644/greece-seaside-from-above
# Contact Links #
Website: hallettvisual.com/
Website AI for Architecture: www.hallett-ai.com/
Instagram: hallettvisual
Facebook: hallettvisual
Linkedin: www.linkedin.com/in/matthew-hallett-041a3881
Views: 1,737

Videos

A Quick 5-Minute Introduction: How I Turn Day into Night Using AI
Views: 2K • 2 months ago
You may have seen my post where I transformed a low-resolution image of London’s 30 St Mary Axe (The Gherkin) into a stunning night photo. The heavy lifting is done by the SDXL LoRA model I’m offering. In this quick 5-minute introduction, I’ll show you how to test out the process yourself before committing to any purchase. # Links from the Video # civitai.com/models/119229/zavychromaxl github.c...
WWII Movie Trailer Made with Various AI Tools
Views: 1.2K • 3 months ago
A faux WWII movie trailer made with images generated with Stable Diffusion and Flux, plus a Flux LoRA trained on 100 images from WWII movies for mood and era-specific details. Image-to-video was made with Runway Gen-3 and local Stable Video Diffusion. These clips are very cherry-picked: even Runway has difficulty with this type of action, and Kling produced nothing usable; all results had modern artifacts. P...
Multi-Camera Texture Baking with TyDiffusion
Views: 3.4K • 4 months ago
Learn how to master texture baking in TyDiffusion for 3ds Max. I’ll show you how to use a powerful modifier to unwrap and project multiple generations onto a single texture, ensuring seamless coverage across your entire object. # Links from the Video # docs.tyflow.com/tyflow_AI/tyDiffusion/ github.com/LykosAI/StabilityMatrix # Contact Links # Website: hallettvisual.com/ Website AI for Architect...
Practical Introduction for TyDiffusion
Views: 8K • 5 months ago
TyDiffusion is an implementation of Stable Diffusion in 3ds Max. In this video I'll show you the theory and help you understand how Stable Diffusion works in a practical, everyday sense with real-world examples. # Links from the Video # docs.tyflow.com/tyflow_AI/tyDiffusion/ # Contact Links # Website: hallettvisual.com/ Website AI for Architecture: www.hallett-ai.com/ Instagram: hall...
The Easiest Installer and Manager for All Things Stable Diffusion
Views: 1.6K • 6 months ago
Stability Matrix makes installing and managing your various Stable Diffusion apps super easy, and allows you to use all your models from a single directory. I'll show you my suggested settings and how to get started in this quick video. # Links from the Video # github.com/LykosAI/StabilityMatrix # Contact Links # Website: hallettvisual.com/ Website AI for Architecture: www.hallett-ai.com/ Insta...
Achieving Hyper-Realistic Product Renderings in 4K Detail with AI
Views: 1K • 9 months ago
Transforming a product background and adding shadows is easy. Let's elevate the basics with professional techniques, adding dynamic lighting, highlights, and shadows. Your client's product should look like it was PHOTOGRAPHED in the space. I'll show you how in this video. If you need Automatic1111, please check my video and written descriptions here: www.hallett-ai.com/getting-started-free # Links f...
Accurate Variations using Z-Depth Element and Stable Diffusion
Views: 4.6K • 10 months ago
Skip the preprocessor and use a perfect Z-depth map from your rendering elements. This method works with any rendering engine, is faster, and provides much more accurate results. If you need Automatic1111, please check my video and written descriptions here: www.hallett-ai.com/getting-started-free # Links from the Video # Checkpoint Model: civitai.com/models/140737/albedobase-xl Collection of S...
Turn 3D Characters Realistic with One Click in Automatic1111
Views: 3.2K • 11 months ago
I'll show you the settings and extension required to make your rendered 3D characters realistic with Stable Diffusion and Automatic1111. If you need Automatic1111, please check my video and written descriptions here: www.hallett-ai.com/getting-started-free # Links from the Video # Checkpoint Model: civitai.com/models/132632/epicphotogasm Upscale Model Database: openmodeldb.info/ # Personal Links...
Upscale and Enhance with ADDED DETAIL to 4K + (Better than Topaz)
Views: 17K • 11 months ago
Similar to Krea and Magnific but offline using Stable Diffusion. Just follow these steps and enhance a low resolution image better than you ever thought possible. If you need to install Controlnet and Automatic1111, please check my video and written descriptions here: www.hallett-ai.com/getting-started-free # Links from the Video # Checkpoint Model: civitai.com/models/132632/epicphotogasm Upsca...
Cinematic Text to Video with Stable Diffusion in 2K
Views: 2.6K • 1 year ago
Over one minute of the highest-quality text-to-video animations. A random selection of what's possible to generate locally on an RTX 4090.
Fooocus is Stable Diffusion's Answer to Midjourney | Now with Subtitles in 13 Languages
Views: 9K • 1 year ago
Fooocus is free, open source, and incredibly fast and easy to use. I'll show you how to install it, configure model paths, and get started using Fooocus MRE in your architectural workflow. # Links from the Video # Website and Shop: hallettvisual.com/downloads Install Fooocus: github.com/MoonRide303/Fooocus-MRE # Personal Links # Website: hallettvisual.com/ Instagram: hallettvisual F...
Fooocus is Stable Diffusion's Answer to Midjourney [Español]
Views: 459 • 1 year ago
Fooocus is free, open source, and incredibly fast and easy to use. I'll show you how to install it, configure model paths, and get started using Fooocus MRE in your architectural workflow. # Links from the Video # Website and Shop: hallettvisual.com/downloads Install Fooocus: github.com/MoonRide303/Fooocus-MRE # Personal Links # Website: hallettvisual.com/ Instagram: instagra...
Introduction to ComfyUI for Architecture | The Node Based Alternative to Automatic1111
Views: 26K • 1 year ago
ComfyUI is free, open source, and offers more customization than Stable Diffusion Automatic1111. Now with Subtitles in 13 Languages # Links from the Video # Website and Shop: hallettvisual.com/downloads Install ComfyUI: github.com/comfyanonymous/ComfyUI Comfy UI Manager: github.com/ltdrdata/ComfyUI-Manager Git For Windows: gitforwindows.org/ # Personal Links # Website: hallettvisual.com/ Instag...
UPDATED: Getting Started with AI for Architecture
Views: 2K • 1 year ago
This updated video will show you an easier method to install Stable Diffusion, the free, open-source software we use in our Architecture Visualization workflow. It also covers the theory of generative images, best practices, ControlNet, the Photoshop plugin, and how to find a good generative source model! Check out my website for a detailed walkthrough with links: hallettvisual.com/aistartup I ...
Image to 3D Mesh Tutorial
Views: 1.4K • 1 year ago
Mountain Lake House Stable Diffusion Animation
Views: 461 • 1 year ago
Getting Started with Stable Diffusion AI for Architecture
Views: 6K • 1 year ago

Comments

  • @MrCorris
    @MrCorris 1 day ago

    Would love a ComfyUI tutorial for this. I intend to buy your AI course over Xmas.

    • @matthallett4126
      @matthallett4126 18 hours ago

      Hey, this is Matt, but using my personal account. I don't cover much Comfy, and even after all this time I still don't use it if I can accomplish the same thing with Forge. I've made the most insane Comfy workflows, and when I go to revisit them I can never remember all the little adjustments. You can easily enough replicate this with Comfy; you just have to install the Ultimate Upscaler node, and there are a few free workflows out there that do the same thing I present. You can also email me and I'll send you my Comfy workflow collection.
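
For readers who want to see the mechanics outside of Comfy or Forge, below is a minimal sketch of the tiled img2img idea behind nodes like Ultimate SD Upscale, written against the diffusers library. The model ID, tile size, and denoise strength are illustrative assumptions, and production implementations also overlap and blend tiles to hide seams.

```python
# Minimal tiled img2img upscale sketch (assumptions: model ID, tile size,
# strength). Upscale conventionally first, then let img2img re-add detail
# one tile at a time so VRAM use stays flat regardless of image size.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def tiled_upscale(img, prompt, scale=2, tile=512, strength=0.3):
    big = img.resize((img.width * scale, img.height * scale), Image.LANCZOS)
    out = big.copy()
    for y in range(0, big.height, tile):
        for x in range(0, big.width, tile):
            box = (x, y, min(x + tile, big.width), min(y + tile, big.height))
            patch = big.crop(box).resize((tile, tile))
            refined = pipe(prompt=prompt, image=patch, strength=strength).images[0]
            out.paste(refined.resize((box[2] - box[0], box[3] - box[1])), box)
    return out

result = tiled_upscale(Image.open("render.png").convert("RGB"),
                       "photo, sharp architectural detail")
result.save("render_upscaled.png")
```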

    • @MrCorris
      @MrCorris 10 hours ago

      @matthallett4126 Cheers Matt. I work in-house at an arch practice as a visualiser, and they have pushed ComfyUI (we had to license it), so no access to Automatic1111. I think it would be much simpler to use that instead, but no one listens to me. Anyway, cheers for the reply. I'll reach out over Christmas via email 👍

  • @imanv8355
    @imanv8355 15 days ago

    Not sure what I'm doing wrong, but I keep getting an error on the LeReS preprocessor saying: Comfy\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\lllyasviel\\Annotators\\.cache\\huggingface\\download\\latest_net_G.pth.50ec735d74ed6499562d898f41b49343e521808b8dae589aa3c2f5c9ac9f7462.incomplete' Any help would be appreciated.

  • @hansukkim2732
    @hansukkim2732 28 days ago

    it is awesome!!!

    • @matthallettai
      @matthallettai 26 days ago

      Thank you! I'm going to make a new video showing my custom night-conversion script, if you want to subscribe and see it when it's ready. It will be free to use.

    • @hansukkim2732
      @hansukkim2732 24 days ago

      @matthallettai I was wondering how you learned LoRA training. Did you make a video about this as well?

    • @matthallettai
      @matthallettai 22 days ago

      There are many tutorials on how to train a LoRA. I use OneTrainer, but you can use whatever is new. You can start with CivitAI; they have a really easy online method for training SDXL and FLUX models. Start with something simple: don't try using 1,000 images like I have, it will melt your brain and GPU.

  • @guilhermepinheiro4433
    @guilhermepinheiro4433 29 days ago

    Hey Matt, I'm loving your content on AI for archviz! Did you manage to create good-looking animations/videos from images yet? I've tried a few programs like Kling AI and Runway, but I'm not convinced any are ready yet. Thanks!

    • @matthallettai
      @matthallettai 26 days ago

      Thanks man! Appreciate that. Right now the best image-to-video tools are Runway, Kling, and Minimax, which are all online services. For local generation, Cog is very popular, LTX just came out, and there's Stable Video Diffusion, which is a year old now. If you don't like the online generators, you won't like the local stuff! There are no frame-to-frame methods as of now that I know of.

  • @mohadeseasgari4409
    @mohadeseasgari4409 29 days ago

    Hi, I have a question: I can't uninstall a package from Stability Matrix, and I don't know why. Can you help me?

    • @matthallettai
      @matthallettai 26 days ago

      You can delete the entire package folder from \Data\Packages\. That will work.

  • @IDanielMoralesI
    @IDanielMoralesI 1 month ago

    Not working on my AMD GPU

    • @matthallettai
      @matthallettai 26 days ago

      There are a few solutions for AMD GPUs; each package has a command-line argument you need to add. Stable Diffusion and much of the other AI architecture out there has been built on the CUDA platform developed by Nvidia. AMD GPUs are great, affordable gaming cards; I used to only buy AMD GPUs, and I still only buy AMD CPUs. It's too bad they're not useful for the AI tech that's come out in the past few years. That's why Nvidia stock went through the roof, by the way.
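
A quick way to verify the CUDA point above on your own machine, assuming only a stock PyTorch install (the AMD workarounds, ROCm builds on Linux or DirectML forks on Windows, are enabled by package-specific launch flags, so check each package's docs for the exact argument):

```python
# Stock PyTorch wheels ship CUDA (Nvidia) support; on an AMD card this
# prints False, and generation falls back to CPU or fails to start.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```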

  • @monah62rus
    @monah62rus 1 month ago

    It's strange, but when I try to set up the masks like yours, everything turns black. Why does this happen?

    • @matthallettai
      @matthallettai 26 days ago

      Oh, there could be so many reasons. Check the logs in the Comfy TyFlow dialog box; I can't remember where that is at the moment. Make sure it can "see" the depth of the model, and check your resolution and model checkpoint. Also try installing Comfy or A1111 on its own and test your Stable Diffusion workflow outside of Max.

  • @ibdesign8207
    @ibdesign8207 1 month ago

    Thank you. So for now we can only generate, not edit the final render image that we have?

    • @matthallettai
      @matthallettai 1 month ago

      Not sure what you mean, but I have other videos, and there are more on YouTube, about using SD to edit your renderings in post.

    • @ibdesign8207
      @ibdesign8207 1 month ago

      @matthallettai I mean: can I render an image, put it into TyDiffusion, and have it generate just the grass, for example?

    • @matthallettai
      @matthallettai 1 month ago

      @ibdesign8207 I would only use TyDiffusion for 3D models you want to quickly turn into an AI image. What you're talking about is inpainting. It's best to take that image outside of 3ds Max and use dedicated tools for AI photo editing like Forge, Comfy, Fooocus, Invoke, etc.

  • @hoangtho5225
    @hoangtho5225 1 month ago

    Thank you for this tutorial!!!

    • @matthallettai
      @matthallettai 1 month ago

      Glad you found it useful!

  • @kirill747
    @kirill747 1 month ago

    Good video, thanks!

    • @matthallettai
      @matthallettai 1 month ago

      So nice of you

  • @AlexBoiko-u5o
    @AlexBoiko-u5o 1 month ago

    Thanks a lot for the video. I get an error when I try to upscale this way: "RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)" Do you know what it means and how to fix it?

    • @matthallettai
      @matthallettai 1 month ago

      That error means your ControlNets are mismatched: you have to use SDXL ControlNets with SDXL checkpoints. (The 2048 in the error is SDXL's text-embedding width, while SD 1.5 ControlNets expect 768, so the matrices can't be multiplied.)

  • @Burhantub
    @Burhantub 1 month ago

    Awesome Trick

  • @julayen
    @julayen 2 months ago

    Thanks for this instructive tutorial 👍 Do you think it's possible to customize the path for launching ComfyUI from TyDiffusion? I already have ComfyUI installed with LoRAs, checkpoints, etc. It's annoying for me because I have no more free space on my C: drive, so I can't use it properly.

    • @matthallettai
      @matthallettai 2 months ago

      Just found something I think works; I'll post it on the Facebook group so I can include visuals, assuming you're there as well. If not, execute 'run_ComfyUI.bat' in the root directory of the TyFlow install. In the CMD window, look for the host address, then copy and paste it into your web browser. It looks like http://127.0.0.1:8188 (YouTube won't let me paste it as a link in the comments).
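
As a sketch of what to expect once the .bat is running: a stock ComfyUI serves on 127.0.0.1, port 8188 (a custom --port flag would change this; the default is an assumption for the TyFlow bundle), and exposes a /system_stats endpoint you can poll to confirm the instance is up before opening the browser:

```python
# Poll the default ComfyUI address; any response from /system_stats means
# the server started and the same URL will work in a web browser.
import json
import urllib.request

URL = "http://127.0.0.1:8188/system_stats"
try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        stats = json.load(resp)
    print("ComfyUI is running:")
    print(json.dumps(stats.get("system", {}), indent=2))
except OSError as err:  # connection refused, timeout, etc.
    print("No ComfyUI server on 127.0.0.1:8188 yet:", err)
```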

  • @AléssioSilva-u6q
    @AléssioSilva-u6q 2 months ago

    Always with more amazing content!! And for free!!! Thank you so much Matt!! 👏👏👏👏

    • @matthallettai
      @matthallettai 2 months ago

      Thanks Alessio! My production quality could be greatly improved, but I do my own editing, and by the time I get to that stage, I'm over it :)

  • @Objektiv_J
    @Objektiv_J 2 months ago

    Seems like if the objects' wire color could drive the segmentation ControlNet, it could be quite a bit more accurate, I'd imagine. Thoughts?

    • @matthallettai
      @matthallettai 2 months ago

      I've tried that before in A1111, using an "object color" output element like you're describing and removing the preprocessor in ControlNet with the Segment model. It kind of works, but this was last year and the model was shit at straight, clean lines and small detail, so in a case like this where the subjects are far away it won't help. I was testing it on a kitchen, controlling single elements like floors and cabinets with a segment mask output from Max. I've always come back to using some type of base image with the colors I want in img2img and adjusting the denoise, like I did here. But things change so quickly, and often it comes down to the quality of the ControlNet model. When Flux has a segment model I'll try it again; Flux is amazing at straight, clean lines.

  • @Proart3dviz
    @Proart3dviz 2 months ago

    Very nice, thanks)

    • @matthallettai
      @matthallettai 1 month ago

      Thanks! Glad you liked it.

  • @hayriistay
    @hayriistay 2 months ago

    Is it possible for the algorithm to generate the depth data itself? If we could choose a Color ID or Object ID, we could change just the desired area with the AI in the future.

    • @matthallettai
      @matthallettai 2 months ago

      I think you're talking about ControlNet? Instead of rendering a Z-depth pass, Stable Diffusion can make one with a preprocessor and a Depth model. It's actually the most common method when using AI, since most users don't start from a rendering program. Just turn ControlNet on, select Depth, and the defaults should work. You need an SDXL ControlNet for SDXL checkpoints. That's the quick YouTube-comment reply; it can get a lot more complicated.
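
For the renderer-side variant from the video, where a Z-depth element replaces the preprocessor entirely, here is a minimal diffusers sketch of the matched-pair rule. The model IDs are common public ones, chosen as illustrative assumptions rather than the exact setup from the video:

```python
# SDXL checkpoint + SDXL depth ControlNet, fed a depth map rendered
# straight from the 3D scene, so no preprocessor runs at all.
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth = load_image("zdepth_element.png")  # Z-depth pass saved by the renderer

image = pipe(
    prompt="aerial photo of a seaside village, golden hour",
    image=depth,  # conditioning image for the depth ControlNet
    controlnet_conditioning_scale=0.8,
).images[0]
image.save("controlled.png")
```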

  • @jakeburchill1805
    @jakeburchill1805 2 months ago

    Hey Matt, thanks for the video! I was wondering if there are ways of producing images from multiple reference images to generate ideas for projects? Would love help on this, or a video on it would be great!! Thanks, Jake :)

    • @matthallettai
      @matthallettai 2 months ago

      Yes, you can make a batch filled with images and use a generic prompt and a denoise around 0.55. There's a batch node in Comfy somewhere, but last time I tried to find it I couldn't figure out which one. I mostly use Forge; its batch generation is dead simple. If you can find my email address, send me a message and maybe we can figure it out together. It also sounds like you could be asking about IPAdapter, which is another beast.
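
The same batch idea in plain Python with diffusers, for anyone without Forge at hand (paths, model ID, and prompt are illustrative assumptions; Forge's batch tab does this through the UI):

```python
# Run every reference image in a folder through img2img with one generic
# prompt at ~0.55 denoise: enough change for new ideas, while keeping the
# rough composition of each reference.
import pathlib
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "concept image of a modern residential building, dramatic light"
src, dst = pathlib.Path("references"), pathlib.Path("ideas")
dst.mkdir(exist_ok=True)

for path in sorted(src.glob("*.png")):
    image = Image.open(path).convert("RGB").resize((1024, 1024))
    result = pipe(prompt=prompt, image=image, strength=0.55).images[0]
    result.save(dst / path.name)
```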

  • @roobal94
    @roobal94 2 months ago

    How can I upload a photo for a reference or image input?

    • @matthallettai
      @matthallettai 2 months ago

      Depends on your app! This is just Matrix; it's an app manager for image generators.

  • @오국환-x6l
    @오국환-x6l 2 months ago

    How do I keep the viewport map on channel 2?

    • @matthallettai
      @matthallettai 2 months ago

      It works like any other channel: just click "Show Shaded Material in Viewport" and make sure you have your bitmap and your UVW map channel set to 2.

  • @whiteghost8244
    @whiteghost8244 2 months ago

    First time at your YouTube channel, but I liked and subscribed because it's quality content. Thanks for the useful video, especially since it's 3ds Max and AI.

    • @matthallettai
      @matthallettai 2 months ago

      Thanks man! I appreciate the comment.

  • @jansrnka2363
    @jansrnka2363 2 months ago

    Hi Matt, with your latest knowledge, do you think there is a way to use the original scan photos to train for swaps on the identical 3D characters? To me that seems the cleanest way to go about this issue...

    • @matthallettai
      @matthallettai 2 months ago

      Yes, we can do that now. Remember, we tried last year with your 3D characters. There are easier tools to achieve this now, without a LoRA for each character.

  • @HANKUS
    @HANKUS 2 months ago

    the "download resources" button does not seem to work for me on the site, it just opens a new tab and reload the current page. I've tried 2 different browsers as well

    • @matthallettai
      @matthallettai 2 months ago

      It only works for me with Chrome, and it's because I have all popup blockers disabled. Try the regular ADetailer, not the UDetailer; I think it's better now.

  • @christiandebney1989
    @christiandebney1989 3 months ago

    There are a lot of stolen frames from Saving Private Ryan in there.

    • @matthallettai
      @matthallettai 3 months ago

      As I mentioned on Facebook, they're inspired frames. I used img2img to create a few frames because I couldn't get Flux to create anything like what I wanted with just text-to-image. I was more interested in seeing how those images would animate with Runway.

  • @resetmatrix
    @resetmatrix 3 months ago

    Thanks, very interesting using Z-depth as a "guide" for new renders.

    • @matthallettai
      @matthallettai 3 months ago

      Glad it was helpful!

  • @ЕкатеринаИльясова-т3ч
    @ЕкатеринаИльясова-т3ч 3 months ago

    Thank you for the tutorial. Where can I find the model for ControlNet (control_v11f1e_sd15_tile)? My dropdown list is empty, and I can't find this model anywhere.

    • @matthallettai
      @matthallettai 3 months ago

      You can download the required models on Hugging Face or Civitai; then you have to place them in the appropriate model directory. If you use a local AI manager like Stability Matrix, it has a download menu you can use for automatic placement. Best to start with a startup video for Forge or Automatic1111.
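
One scripted way to do the fetch-and-place step: the repo and filename below are the standard lllyasviel ControlNet 1.1 release on Hugging Face, while the destination path is a typical A1111/Forge layout and an assumption to adjust for your own install:

```python
# Download the tile ControlNet model and drop it where the WebUI looks
# for ControlNet weights.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="lllyasviel/ControlNet-v1-1",
    filename="control_v11f1e_sd15_tile.pth",
    local_dir="stable-diffusion-webui/models/ControlNet",  # assumed layout
)
print("Saved to", path)
```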

  • @needse1
    @needse1 3 months ago

    thanks

    • @matthallettai
      @matthallettai 26 days ago

      You're welcome!

  • @popeopera
    @popeopera 3 months ago

    I keep getting "Package Modification Failed" while installing packages... any ideas? (pip install failed with code 2)

    • @matthallettai
      @matthallettai 3 months ago

      It's a late reply; hopefully you have it figured out. If all your packages get that error, then it's perhaps a problem with Matrix or Windows security. If it's just one package, that happens during some updates; there's usually a fix posted quickly on GitHub, or you have to revert to a previous version.

  • @pirlibibi
    @pirlibibi 3 months ago

    Gold tutorial!

  • @davida5136
    @davida5136 3 months ago

    DOPE

  • @davida5136
    @davida5136 3 months ago

    AWESOME

  • @davida5136
    @davida5136 3 months ago

    SWEET

  • @davida5136
    @davida5136 3 months ago

    Awesome!

  • @davida5136
    @davida5136 3 months ago

    Great!

  • @helix8847
    @helix8847 4 months ago

    The issue is, all I see are the movies it has taken its inspiration from. That would most likely be because of the limited data.

    • @matthallettai
      @matthallettai 3 months ago

      I extracted frames from some movies and built a Flux LoRA; the base model of Flux has very limited WWII data.

  • @johnny5132
    @johnny5132 4 months ago

    unbelievable!

  • @chove93
    @chove93 4 months ago

    Thank you

  • @L30nHbl
    @L30nHbl 4 months ago

    You are a genius! Thanks a lot!

    • @matthallettai
      @matthallettai 4 months ago

      Thanks, but it's Tyson Ibele who's the genius!

  • @SouvikKarmakar1
    @SouvikKarmakar1 4 months ago

    Great video, thanks. Wish Blender had this addon.

    • @matthallettai
      @matthallettai 4 months ago

      There are AI addons for Blender, but I don't know if any of them has this cool unwrap feature.

  • @smukkegreen
    @smukkegreen 4 months ago

    Great video. Gotta try this.

  • @lawebley
    @lawebley 4 months ago

    Brilliant stuff!

  • @RyanDaily
    @RyanDaily 4 months ago

    When installing Stability Matrix, can you share data with TyDiffusion for the sake of drive space?

    • @matthallettai
      @matthallettai 4 months ago

      Yes and no. In TyDiffusion you can share a few model directories, but not the main Comfy install. The first version of TyDiffusion had installation issues and kept having to reinstall, so I kept it away from Matrix. TyDiffusion has very limited model use, so if you have 20 GB free, I would keep it all separate for now and just use a few checkpoints in TyDiffusion.

  • @phunkaeg
    @phunkaeg 4 months ago

    Very Awesome Matt!

  • @pasindumadulupura8462
    @pasindumadulupura8462 4 months ago

    After many videos I'm able to run ControlNet in Comfy without errors. Kudos to you, man.

    • @matthallettai
      @matthallettai 4 months ago

      Thanks for the comment. Glad I could help.

  • @3Dsnapper
    @3Dsnapper 4 months ago

    So I've recently tried shifting from Corona to D5 because of its AI features. They were awesome but still had some limitations; this is exactly what I was hoping for to make up for that. I hope this turns out well 😅 Thank you for this tutorial!

    • @matthallettai
      @matthallettai 4 months ago

      Thanks for the comment. If you're using 3ds Max, check out TyDiffusion: SD right inside the Max window.

    • @3Dsnapper
      @3Dsnapper 4 months ago

      @matthallettai I've left Max though, too expensive. Haha. Is there a version or anything similar for Blender?

  • @TheVertigo2
    @TheVertigo2 4 months ago

    What can I say, Fooocus already even has a 'Pa Art Ukrainian Folk Art' style. And that's awesome!

    • @matthallettai
      @matthallettai 4 months ago

      Slava Ukraini!!

  • @R1PPA-C
    @R1PPA-C 4 months ago

    Have you worked with the animation side of things yet? I'm struggling to get the animations to come out like the single images do... the results aren't wildly different, but it's almost like it's using a different model... Also, how do you have it set up so that you can see the image as it's generating? Mine just goes through the whole process and then outputs the final image. I mainly want to see what's happening as the animation is processing; currently I have to wait for the whole sequence to be finalised before I see what the result will look like. Thanks :)

    • @matthallettai
      @matthallettai 4 months ago

      You're always going to have that weird morphing effect with frame-by-frame SD animation. No matter what tricks you try, no frame is 100% the same as the last, at least currently; I'm sure someone out there is working on it. The AI video you see now is made with video-trained models. What we need is a hybrid, or a ControlNet designed for frame-by-frame img2img denoising. The current tech is AnimateDiff and Deforum (see the example on this channel). Personally I like SVD, but that has little control.

    • @R1PPA-C
      @R1PPA-C 4 months ago

      @matthallettai Well, the issue I'm having is not the difference between frames, but that the initial outcome is completely different when doing a single frame with the same settings as when I hit animation. I said not wildly different, but sometimes they are... I train a model to be something I want for each frame, but when I go to animate it's like I've used completely different prompts... I'm lost.

    • @matthallett4126
      @matthallett4126 4 months ago

      @R1PPA-C The more complex your scene, the more interpolation the AI does with what it "sees". The examples of other animations you've seen look smooth because of their simplicity in size and materials. Leaves and grass, for example, will change dramatically between frames no matter what you do. Small details change so much it's not worth it. Trust me, it's not you.

  • @ChrisCenters
    @ChrisCenters 5 months ago

    When I first started working in post production, the buzzword of the day was "morph". All my clients would come to me asking for this thing to go from this to that, and I would ask, "Okay, how are you seeing the transition from this to that?" They'd all say, "I don't know, maybe it morphs?" without knowing what that means or how it could be done. I wish I had this video to show them back then.

    • @matthallettai
      @matthallettai 4 months ago

      That's exactly how I describe it too: morphing. We can't escape it yet.

  • @omer133
    @omer133 5 months ago

    Thank you for the video. What Stable Diffusion models can you recommend, specifically for interior design and for architecture separately?

    • @matthallettai
      @matthallettai 4 months ago

      Don't bother with any model that claims it's good for interiors or architecture, unless it's a LoRA addon to experiment with for adding certain looks. My favored checkpoints right now are AlbedoBase XL 2.1 for exteriors, NightVision, EpicPhotogasm, RealVisXL, and some others (spelling may be off; I'm away from my PC). Best to download popular XL models aimed at photorealism; portrait examples are OK. Then compare them with the X/Y/Z plot script at the bottom of A1111 or Forge. It makes a handy grid for you to compare.
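
The same comparison, sketched by hand in diffusers for anyone outside A1111/Forge: one prompt and a fixed seed across several checkpoints, pasted into a single contact sheet (the checkpoint filenames are placeholders, not a recommendation):

```python
# Fixed-seed, fixed-prompt comparison across checkpoints; a fair test
# because only the model weights change between tiles.
import torch
from PIL import Image
from diffusers import StableDiffusionXLPipeline

checkpoints = ["albedobase_xl.safetensors", "epicphotogasm_xl.safetensors"]
prompt = "exterior photo of a lakeside house, overcast sky, photorealistic"

tiles = []
for ckpt in checkpoints:
    pipe = StableDiffusionXLPipeline.from_single_file(
        ckpt, torch_dtype=torch.float16
    ).to("cuda")
    gen = torch.Generator("cuda").manual_seed(42)
    tiles.append(pipe(prompt, generator=gen).images[0].resize((512, 512)))
    del pipe
    torch.cuda.empty_cache()

sheet = Image.new("RGB", (512 * len(tiles), 512))
for i, tile in enumerate(tiles):
    sheet.paste(tile, (512 * i, 0))
sheet.save("checkpoint_grid.png")
```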

  • @LudvikKoutnyArt
    @LudvikKoutnyArt 5 months ago

    I believe the technical term for an AI enthusiast is a "proompter" :)

    • @AB-wf8ek
      @AB-wf8ek 5 months ago

      Not true. Although language is an integral part, with complex node-based processes it's only a fraction of it.