Image Control Through Prompting: Mastering Camera Perspectives | Alchemy with Xerophayze

  • Published 27 Apr 2024
  • Join us in this comprehensive guide, the first in our new series, designed to enhance your skills in image generation. In this episode, we dive deep into the art of crafting effective prompts to control camera perspectives, helping you capture everything from stunning close-ups to expansive distant views. Whether you're a beginner or a seasoned artist, you'll learn to harness the power of Stable Diffusion and Automatic 1111 Forge Edition to create visuals that stand out. Don't forget to check out our links below for more resources and updates!
    🔗 Links:
    XeroGen Lite, online prompt generator: shop.xerophayze.com/xerogenlite
    🛒 Shop Arcane Shadows: shop.xerophayze.com
    🔔 Subscribe to our YouTube channel: video.xerophayze.com
    🌐 Explore our portfolio: portfolio.xerophayze.com
    💼 Xerophayze Google Share: share.xerophayze.com
  • Howto & Style

Comments • 22

  • @Shabazza84
    @Shabazza84 months ago +2

    It's always a good trick to not just tell the AI what the perspective should be, but to prompt for things that force it to show what you want to see.
    So if it ignores your "full body" prompt, add a (maybe even reduced-weight) prompt for a specific type of belt (-> cowboy shot), or shoes (-> full body).
    Or if you want a close-up, give it [heterochromia::3] or something, so it zooms in to show the eyes early but doesn't actually apply the concept
    (because the term is dropped from the prompt after 1-3 steps).
    I won't advertise another artist here, but I saw a video where this "use concept swapping / concept enforcing to get what you want" approach was explained.
    And it's a game changer, especially for anime models with Booru tags. Those usually have no idea about specific camera angles or shot compositions.
    But they do know what a belt or heterochromia is (just to stick with those examples).
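
    To make the [heterochromia::3] trick concrete: in Automatic1111's prompt-editing syntax, [keyword::N] keeps a keyword only for the first N sampling steps and then removes it. A minimal sketch of a close-up prompt built this way (the subject and surrounding tags are placeholder examples, not from the video):

        portrait of a woman, [heterochromia::3], detailed eyes, soft lighting

    During steps 1-3, "heterochromia" pulls the composition in tight on the eyes; because the term is dropped afterwards, the finished image keeps the close-up framing without actually rendering mismatched eyes.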

  • @dadadies
    @dadadies months ago +1

    I suggest doing a random manual prompt (and maybe settings) generation video where you do a bunch of these generations. This would be specifically useful for people who don't have GPUs and generate on CPUs instead: images take forever to generate, so it takes forever to test much of anything and figure out what does what. For example, it takes me anywhere from 5 minutes to over an hour to generate one image, so it's hard to test much of anything in a reasonable time. This would probably only benefit people in that situation and help them get an idea of what might do what. Just a suggestion.

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg months ago +1

      Yeah, actually we could do a follow-up video on this where I have a series of prompts that show off the different words or details that can be used to control the distance and angle of the camera view. Then just plug those into the XYZ plot extension, and I can even spread that out across a few different models as well. Great idea, thank you.
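
      Until that video exists, here is a rough sketch of such an XYZ plot setup in Automatic1111's built-in "X/Y/Z plot" script (the shot keywords and prompt are placeholder assumptions, not from the video):

          Prompt: close-up shot of a knight standing in a misty forest
          X axis: Prompt S/R = close-up shot, medium shot, cowboy shot, full body shot, wide shot
          Y axis: Checkpoint name = (two or three models to compare)

      With "Prompt S/R" (search and replace), the first value in the list must appear verbatim in the prompt; each grid column then swaps it for the next value, so a single run yields the whole camera-distance comparison across models.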

  • @demiurgen3407
    @demiurgen3407 months ago +2

    Great video! The best way to learn is watching the pros work. Maybe a video in the future about how to not get anything in the background? Totally neutral backgrounds, like a green screen, etc.

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg months ago

      That would be a very short video. But easy to do.

    • @demiurgen3407
      @demiurgen3407 months ago

      @AIchemywithXerophayze-jt1gg Awesome! Because I always get lots of random abstract shapes in the background, especially when I'm using ControlNet and OpenPose.
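
      For reference, the usual approach here is to prompt the background explicitly and suppress scenery in the negative prompt. A minimal sketch (the tag choices are assumptions, not from the video):

          Prompt: ..., simple background, plain green background, studio backdrop
          Negative prompt: detailed background, scenery, outdoors, clutter

      Booru-tagged anime models tend to respond well to "simple background" / "green background"; photoreal models often need wording like "seamless green screen studio backdrop" spelled out.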

  • @GryphonDes
    @GryphonDes months ago +1

    Good stuff, Eric!

  • @Fanaz10
    @Fanaz10 months ago +1

    These results look amazing, what model are you using??
    Edit: just noticed it's Realities Edge XL, downloading now 😁

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg months ago

      Yeah, that's my go-to right now. But Juggernaut XL Lightning v9 is really nice as well; it's the one I was using before I started using Realities Edge.

  • @waurbenyeger
    @waurbenyeger months ago +1

    Haha, don't I feel like a fool. Something so obvious completely went over my head. Thank you for the video and the info, subbed. Also, have you messed with the (free) program Krita with the Stable Diffusion extension? It's a painting program that's kiiiiind of like Photoshop. You can generate layers, mask things, make hue and saturation adjustments, etc., all while being able to use your favorite checkpoints, LoRAs, etc.
    EDIT: Forgot to mention the coolest part ... you can make very basic drawings and it will generate images based on them (like Sketch in A1111). And it will even do it in real time!

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg months ago +1

      Yeah, Krita is pretty good. I primarily use GIMP, but I did do a video a little while ago about integrating Stable Diffusion with Krita so you can do those basic drawings and have it generate images from them in real time.

    • @Fanaz10
      @Fanaz10 months ago

      Yeah, I was thinking about that: generating a sketch and then using it to design a website from that sketch. Is that possible?

  • @baheth3elmy16
    @baheth3elmy16 months ago +1

    Thanks!! Maybe use a different model?

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg months ago

      I think you're suggesting that I use a couple of different models during my videos? I think that's a good suggestion.

  • @phozel
    @phozel months ago +1

    Very nice video, thanks! But what are the specs of your computer?

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg months ago +1

      10th-gen Intel Core i9, 128 GB of RAM, NVMe system drive, RTX 3080 Ti with 12 GB of VRAM.

    • @phozel
      @phozel months ago

      @AIchemywithXerophayze-jt1gg Thank you.

  • @therookiesplaybook
    @therookiesplaybook months ago +1

    How do you get cinematic-looking images like Midjourney creates? The majority of images SD creates of people look like corporate stock photos.

    • @AIchemywithXerophayze-jt1gg
      @AIchemywithXerophayze-jt1gg months ago

      Each has its strengths. The problem I have with Midjourney is that everything generated has that "Midjourney" look: almost always too stylized, even when you're trying to achieve photorealism.

    • @therookiesplaybook
      @therookiesplaybook months ago

      @AIchemywithXerophayze-jt1gg For sure, I get that argument too. I would like to find a balance of both: the realism of SD with the cinematography of Midjourney.

    • @dthSinthoras
      @dthSinthoras months ago +1

      Low CFG helps; also, choose a more stylized model.
      Or, more advanced: train a LoRA on that MJ look, or use IP-Adapter for it.
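
      As a rough starting point for that advice (the values below are illustrative assumptions, not settings from the video):

          Prompt: cinematic film still of a detective in a rain-soaked alley, dramatic rim lighting, shallow depth of field, film grain
          Negative prompt: stock photo, flat studio lighting, plain background
          CFG scale: 3-5 instead of the usual 7

      A lower CFG scale loosens prompt adherence and lets the model's own style come through, trading some prompt accuracy for a more stylized, cinematic look.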