To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/cuivlazygeek. You'll also get 20% off an annual premium subscription.
My Patreon: www.patreon.com/cuivlazygeek
My Merch Store: cuiv.myspreadshop.com/
Nico Carver/Nebula Photos Channel: www.youtube.com/@NebulaPhotos
Ultimate PixInsight Guide: th-cam.com/video/XCotRiUIWtg/w-d-xo.html
BatchFITSKeywordEdit repo link: www.cosmicphotons.com/pi-scripts/bfke/
For any affiliate link purchases, make sure to accept cookies if prompted! Also, the Honey browser plugin may steal commissions from creators if you interact with it (even just clicking "OK" on the "No coupons found" message)!
Amazon affiliate: amzn.to/49XTx01
Agena affiliate: bit.ly/3Om0hNG
High Point Scientific affiliate: bit.ly/3lReu8R
First Light Optics affiliate: tinyurl.com/yxd2jkr2
All-Star Telescope affiliate: bit.ly/3SCgVbV
Astroshop.eu affiliate: tinyurl.com/2vafkax8
Hi Cuiv, could you do a video to show us how to do this in Siril? Joyeux Noël!
Seems there's a lot of interest - it's the same logic, but you just need to create sequences of images that you can register and/or stack.
Thanks!
bro I love this channel. Everything posted on here is high quality.
Thanks mate!
First! Astro Pixel Processor does this beautifully; then do the rest as you show in PixInsight. Merry Christmas Cuiv :)
I think the same way. APP shines in use cases like this.
I hear a lot of good things about APP when it comes to alignment and mosaics!
Hi Cuiv,
Just wanted to say thanks for the preview trick! It's awesome
😁
If you use "Add Custom" in WeightedBatchPreprocessing (WBPP), you can combine different filters etc. in one stack.
Excellent video as always Cuiv. This is a topic I've been looking into recently, specifically how close merging stacked images would be compared to stacking the individual subframes from each source. Gut feeling say that stacking the individual subframes would be better but this isn't always possible (especially on very large collaborative images).
Thanks Pete! And yeah to be honest, from experience it seems to be that just merging stacks doesn't show much difference from stacking "correctly" in the first place... But it hurts me to not stack correctly! :)
Thanks for this amazing video Cuiv! In the autumn I captured M31 with a 533MC sensor in one location, and later at home I captured the same object using the 183MM sensor with an Ha filter on the same telescope, the Horizon ED60 with a 0.6x flattener/reducer. I stacked them separately in PixInsight and realigned them together with the StarAlignment tool so I could continue with the normal post-processing to finish the job. Though the sensor dimensions are completely different between the two cameras, it worked, though there was a lot of field rotation and dynamic cropping afterwards.
I liked the preview registration method you showed, and I'd like to know a bit more about assessing images via the weighting method.
The DynamicAlignment process followed by the Image Blend script is another option within PixInsight.
I've been combining Ha data taken with my mono camera with colour images taken with my DSLR and colour astronomy cameras in Affinity Photo. Basically I've just been adding the Ha data as a luminance layer, which generally gives good results. I do have some macros that allow you to blend most combinations of LRGBSHO data, provided that the different channels are correctly stretched and aligned, but I haven't really used them, partly because I didn't know how to create separate colour channels from a single colour image in AP. I've just found a tutorial on how to do it, so I'll probably start playing around with this.
That's great!! I need to know what would be in general the best flow for LRGBSHO images :) and then teach it!
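For readers who prefer to script the "Ha as a luminance layer" idea outside Affinity Photo, here is a minimal sketch in Python with numpy and astropy. It assumes both stacks are already registered and normalized to 0..1; the file names, the (3, H, W) channel layout, and the 70/30 blend weight are placeholders for illustration, not anything taken from the video.

```python
import numpy as np
from astropy.io import fits

rgb = fits.getdata("rgb_stack.fits").astype(np.float64)  # assumed shape (3, H, W)
ha = fits.getdata("ha_stack.fits").astype(np.float64)    # assumed shape (H, W)

# Current luminance (simple channel average; dedicated tools use refined weights)
lum = rgb.mean(axis=0)

# Blend the existing luminance with Ha, then rescale each channel so the image
# takes on the new luminance while keeping its colour ratios.
new_lum = 0.7 * ha + 0.3 * lum
scale = np.divide(new_lum, lum, out=np.zeros_like(lum), where=lum > 0)
rgb_ha = np.clip(rgb * scale, 0.0, 1.0)

fits.writeto("rgb_plus_ha.fits", rgb_ha.astype(np.float32), overwrite=True)
```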
I was watching this because it looked like (and is) good information to have if and when it comes up. It's a cloudy-night sort of thing to do. Sometimes you can get little pearls of information unexpectedly, like using 'Manual Filter Wheel' with a filter drawer to automatically add the filter metadata to the file. That's a pearl, because I use a filter drawer on my travel kit but it never occurred to me. Thanks for all you do.
Oh yes the manual filter wheel is great to use when you have a filter drawer, glad it helped!
Great video Cuiv! Joining data from different telescopes or different pixel scales can be a real pain.
When I have data from the same telescope but from my two different IMX571 cameras (like you have) and I want to stack everything together in one run, these are the steps I take:
1. Open a file from either of the cameras
2. Open an ImageContainer
3. Drop all images from the same camera as the opened file into the container
4. Edit the ImageContainer's properties so that no screen messages are displayed (the last item in the list, changed from false to true)
5. Open Crop (not DynamicCrop!)
6. Adjust how many pixels to add/remove on each of the four sides so that the opened image matches the dimensions of the other camera's images
7. Drag the triangle of the Crop process onto the bottom bar of the ImageContainer
8. Close everything
9. Run WBPP with all the "untouched" images from the other camera plus all the size-adjusted images we have just created
Maybe I'll do a video on that too, as it is difficult to explain without any supporting visuals.
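For anyone who wants to script the same idea outside PixInsight, here is a rough Python sketch of the crop/pad step described above: every frame from the second camera is center-cropped or zero-padded to the first camera's dimensions, so all frames can then go through a single WBPP (or Siril) run. The folder path and target size are hypothetical, and this only makes sense when both cameras share the same pixel size and optics.

```python
import glob
import numpy as np
from astropy.io import fits

TARGET_H, TARGET_W = 4176, 6248   # hypothetical camera-A frame size

def match_dimensions(data, target_h, target_w):
    """Center-crop or zero-pad a 2D frame to the target height/width.
    For OSC data, keep the offsets even so the Bayer pattern is preserved."""
    h, w = data.shape
    # Crop if too large
    y0 = max((h - target_h) // 2, 0)
    x0 = max((w - target_w) // 2, 0)
    data = data[y0:y0 + min(h, target_h), x0:x0 + min(w, target_w)]
    # Pad if too small
    pad_y = target_h - data.shape[0]
    pad_x = target_w - data.shape[1]
    return np.pad(data, ((pad_y // 2, pad_y - pad_y // 2),
                         (pad_x // 2, pad_x - pad_x // 2)))

for path in glob.glob("cameraB_lights/*.fits"):
    with fits.open(path) as hdul:
        fixed = match_dimensions(hdul[0].data, TARGET_H, TARGET_W)
        fits.writeto(path.replace(".fits", "_resized.fits"),
                     fixed, hdul[0].header, overwrite=True)
```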
Interesting technique! Yes that could work as well, reminds me of the Crop technique I showed a long time ago to speed up stacking :)
@CuivTheLazyGeek Yeah! The Process Container part I stole with pride to re-use for other batch operations after I saw your video. 👍🏻
By manipulating the images from one camera, all frames can be stacked in one go, making use of WBPP's capabilities, including drizzling. Or you can run them through batch preprocessing to use Siril's fast stacking capabilities.👍🏻
absolutely great! thank you so much!!
Glad you liked it!
Thanks for yet another great video. As I just got the lower-priced Askar Duo filter kit, I will be attempting this soon. I have a new filter wheel coming soon as well... a present for moi! I would like to see if you might, at some point, do some more NINA videos, as I know it is constantly changing. I use it exclusively now, but still consider myself a semi-basic user; I know it does a lot more than I know how to do. You did some great videos a few years ago which were very helpful, but as there are not a lot of videos about NINA out there, I'm just curious about an updated vid! A fan request! If it were not for your videos on using a mini remote PC, my life would be a lot tougher these days!! HAHA. Thanks, and keep looking up!! Mark
Yeah I need to do an update on NINA :)
Thank you Cuiv for the great video! I will have this problem next year when I get my new rig: I will have 550mm FL and 2563mm FL scopes doing parallel imaging. I am thinking about how I can use these very different images; so far, my idea is to prepare the two images and replace part of one with the other through a mask. Maybe you have another idea? I would appreciate it.
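One simple way to express the "replace one with the other through a mask" idea, as a hedged sketch: it assumes the long-focal-length image has already been registered onto the wide-field image's pixel grid and that both are normalized to 0..1. The file names are illustrative only.

```python
import numpy as np
from astropy.io import fits

wide = fits.getdata("widefield_550mm_registered.fits").astype(np.float64)
close = fits.getdata("closeup_2563mm_registered.fits").astype(np.float64)
mask = fits.getdata("blend_mask.fits").astype(np.float64)  # 1 where the close-up should win

# Weighted replacement: the mask decides, pixel by pixel, which image dominates
blended = wide * (1.0 - mask) + close * mask
fits.writeto("blended.fits", blended.astype(np.float32), overwrite=True)
```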
Hi Cuiv! Thanks for all you do. Great video. Out of curiosity, what is that book on your desk?
It's called "The Astrophotography Manual", by Chris Woodhouse!
Hello Cuiv !
Thanks for the video! What about stacking frames with different FOVs/resolutions, say from a collaboration? You said it could work in WBPP, but you also said that if the resolution/FOV is too different it would not work. How would you handle that?
Hi Cuiv, Merry Christmas. I used Dynamic Star Align and then Dynamic Crop to register a few Sunflower Galaxy images taken with different scopes and cameras; otherwise very similar.
Thanks for sharing!
This may be an odd question, but since stars are point sources, I'm assuming all the differences in size come down to saturating the pixel(s) in question and bleeding into neighboring pixels, or some type of mount tracking artifact or deliberate dithering. I understand how narrowband filters help with nebulosity, but are there any filters that would actually help optimize for different temperature stars, or is the vast majority of starlight that spans more than one pixel just smeared data? As stars are broadband targets, I don't know whether they should be filtered at all, but if my question makes any kind of sense, I'm curious whether the stars could have cleaner data instead of just fixing them in software.
It's a combination of many things:
- optics and their spot diagrams: they can only render a point light source as a disc (check my video on the Minicat 51 to understand spot diagrams)
- diffraction (if optics diffraction-limited)
- atmospheric seeing
- mount tracking (effectively looks like seeing)
- blooming on sensor
Right on time! I was scratching my head about how to stack two series together, shot weeks apart, with different exposure lengths and slightly mismatched focal lengths (zoom lens).
Glad you found this helpful!
Thanks for your amazing videos! Longtime follower but first comment. :) Would you take arcseconds/pixel into consideration when deciding which image to use for detail? For example, I want to run a dual setup, but I have the ASI2600MC Pro and the ASI492MM Pro, which have very different pixel sizes. If I use an Apertura CarbonStar and an Askar FRA500, I can put the smaller-field 492 on the FRA and the larger 2600 on the CarbonStar to get a similar FOV, but the pixel scales end up as 1.3 and 1.9 arcsec/px. If I do it the other way around they end up as 1.66 and 1.55, much closer, but I would have to mosaic with the 492 to cover the 2600's field. On top of that, it makes sense to have the best possible detail in the monochrome image, so a finer arcsec/px on the 492 seems preferable; otherwise I'd be using a 1.9"/px image as luminance for a 1.3"/px image.
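For reference, the pixel-scale numbers being compared above come from the standard formula: scale ["] = 206.265 × pixel size [µm] / focal length [mm]. A quick sketch, where the example values are purely illustrative and not a claim about any specific camera/scope pairing mentioned in the comment:

```python
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm

# Illustrative values only (3.76 um pixels on 500 mm and 600 mm focal lengths):
print(round(pixel_scale(3.76, 500), 2))  # 1.55
print(round(pixel_scale(3.76, 600), 2))  # 1.29
```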
I notice "The Astrophotography Manual" book on your desk. Any good?
I have a nice 8" EdgeHD telescope on an alt-az mount and am about to buy my first camera. All of this has my head spinning, but I cannot wait to start my astro imaging journey. I am thinking it will make a nice hobby once I retire. Until then I get to learn the basics and practice. Thank you.
Very cool! Have fun with it! And remember: practice makes perfect 🎉
You're going to have a lot of fun! :)
What's your opinion on combining, say, UV/IR-cut data with ALP-T data using the Combine Images script in the GraXpert toolbox and its blend function? This is how I have been doing it and I seem to be getting great results. I'm guessing it's essentially using PixelMath.
It is effectively doing ~(~A*~B) in terms of PixelMath, but if it works for you that's great, no further questions asked!!
@CuivTheLazyGeek Thank you. I assumed it was effectively the same. It's a good lazy way.
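For the curious, here is what the ~(~A*~B) expression (a screen blend) looks like as a minimal numpy sketch, assuming two registered, normalized (0..1) stacks. The file names are placeholders, and this illustrates the PixelMath expression mentioned above rather than GraXpert's internal implementation.

```python
import numpy as np
from astropy.io import fits

a = fits.getdata("uvir_stack.fits").astype(np.float64)
b = fits.getdata("alpt_stack.fits").astype(np.float64)

# Screen blend: ~(~A*~B), where ~x means 1 - x
screen = 1.0 - (1.0 - a) * (1.0 - b)
fits.writeto("screen_blend.fits", screen.astype(np.float32), overwrite=True)
```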
I've got no tips, but I'm taking away a truckload... Thanks, Cuiv! 👍👍
Cheers, glad it's helpful!
Great video! I've got the APP version of how to do this on my channel, and there will also be an article in Sky at Night magazine this Feb 😊
Awesome! Congrats on the article!
SETI Astro method using mosaic by coordinates
Sure is useful….thank you…❤
Registar supposedly does this very easily
I had never heard of it! Thanks for pointing it out!
@CuivTheLazyGeek I tried out the free version and was impressed, so I bought it. It can calibrate the image rather accurately before merging.
I'm not a hater, but only PixInsight? What about free software? Merry Christmas!
Did you watch the video? He mentions other software many times and how it works similarly; obviously you can only demo it in a meaningful way in one package, but all the steps are at least similar in Siril. Maybe you could create the corresponding video in Siril? Anyway, I loved it! Great work by @Cuiv as always!
It's always £££ with astrophotography; like a crappy video game, it's pay-to-win.
@DrNat1 Yeah, but usually you can cheat. In my view, PixInsight is just a collection of other pieces of software packed into one, so you can go the long route and get the same result.
0:28
In Siril, the default global registration method (Homography) transforms with 8 degrees of freedom, including x/y shift, rotation, scaling (larger/smaller), and skew. If all frames use the same filter and you want to join them, start with your (mixed) calibrated lights, create a sequence using the Conversion tab, then proceed with registration and stacking. If you have different filters, stack them separately, then create a sequence from the stacked images and register it to put them onto the same scale.
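As an aside, the same "register the stacked masters onto one scale" step can be sketched outside Siril in Python with the astroalign library. Note that astroalign estimates a similarity transform (shift, rotation, scale), which is less general than Siril's homography, and the file names below are placeholders for two single-channel stacked masters.

```python
import astroalign as aa
import numpy as np
from astropy.io import fits

reference = fits.getdata("stack_ha.fits").astype(np.float64)
moving = fits.getdata("stack_oiii.fits").astype(np.float64)

# Match star patterns and transform 'moving' onto the pixel grid of 'reference'
registered, footprint = aa.register(moving, reference)
fits.writeto("stack_oiii_registered.fits", registered.astype(np.float32), overwrite=True)
```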
It's not M31, it's the Triangulum Galaxy?
Yes, I misspoke :)
Neat
M33 not M31. Lol
Ha! Yes
My main challenge is stitching panels together for mosaics. Despite my best efforts, making the backgrounds match seems nigh impossible. I don't have the budget for a shorter focal length scope, so bigger targets like Andromeda, Orion, the North America Nebula, etc. have to be captured in multiple panels.
Yeah, mosaics are really hard with the backgrounds... I don't really do mosaics for that reason. If I want a wider FOV I'll switch to my smaller scope :) Otherwise I hear APP can do a great job with mosaics!
@CuivTheLazyGeek I'll have to give that a try, thanks! I've mainly used DSS, Siril, and Photoshop so far, and even then I know I'm barely scratching the surface of what these tools can do.
I only have one astrograph unfortunately, a 700mm Askar refractor, and I haven't yet had a chance to try out a cheaper 1m doublet I picked up recently for free (so many clouds...). My 400mm reflector is not suitable for photography; I don't trust it to hold my cameras in its 3D-printed focuser. I definitely want to get more, but it takes time and a lot of money. I can be patient, considering I bought my first telescope in March 2024 :)
ASTAP does a reasonable job of mosaics.