Been waiting a long time for this one from you. Love your content!
I love playing around with the colours of my images. I have so few clear nights at the moment that I spend most of my time reprocessing old data, if nothing else I'm advancing my processing skills. I even colourize my Ha images taken with my mono camera, as you said Cuiv, if you could see these nebulae with your eyes they would appear red anyway. I don't use PixInsight but still found this video interesting and entertaining.
Playing around with the colors is so much fun! I'm glad you enjoyed the video despite not using PixInsight!
Science Art. What a boring world if we couldn't interpret these wonders. That's what I love about the universe 👍
Exactly!!!
I had never noticed the face in the nebula before. It's @27:03 behind "TO".
Oh wow, cool!! I hadn't noticed either!
The timing of this video is amazing. I was pulling my hair out last night processing Orion. You gave me some great ideas.
I can't wait to try this! Thank you!
That's awesome! Glad I could help!
Another great video. It is a hobby. It is supposed to be fun and you make it so. Thanks!
Hi Cuiv, I was mucking around with a dual refractor setup for the first time over the weekend. It's got an OSC camera on one scope and mono on the other. I shot Orion as a test target, capturing RGB and Ha. Having watched your video on the HDRMultiscaleTransform process, I often take a LUM or Ha layer, perform the HDRMT on it, and then use the LRGB recombination process (with only the Ha/LUM channel active) to recombine it onto the RGB image. The result is that the overexposed core of Orion tends to disappear without having to blend images back in with PixelMath.
Thanks Mark!!
Great video. Can you make the same tutorial for processing in Siril? Thank you.
😂
I wish! Unfortunately my Siril-fu isn't advanced enough!
Thanks for sharing your processing approach. Very interesting !
My pleasure!
Despite the deviation from what is expected in the SHO Hubble palette, this could be one of your top 3 videos for its utility. Great, great job here
Yeah, I kind of sort of don't care about what is expected for SHO palette (which for most targets would be a sea of green anyway) but I still hope it helps people!
I just got my Omegon 571 camera, and I will try to follow your process. Thanks again for your videos Cuiv
Astrophotography image processing has come a loooooooooooooong way. Cool. Thanks. ML and scripts. Lots of automation.
Yesss! Things are much easier now!
Cuiv you always make things look so easy, and lazy ;-) These dual narrowband filters are great: mono results with OSC ease 🙂 Thanks for the video, and enjoy those D1 and D2 filters before I ask for them back 🙂
I'm kinda hoping you forget about those filters Dave lol
I think what is missing from this video is the association of the weighting factors used in the pixel math with the QE value from the camera specs. For example if using the ASI2600MC Pro, OIII @ 500nm will result in a weighting of 0.5 Blue, 0.95 Green, and 0.05 Red. The values can be adjusted from there as required for aesthetics.
Interesting - how do you calculate it?
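One possible reading of the weighting idea above: take the camera's relative R/G/B quantum efficiency at the emission line's wavelength and normalize those responses into PixelMath coefficients. A toy Python sketch, using the commenter's figures for OIII at ~500nm (these values are illustrative, not measured; check your own sensor's datasheet):

```python
# Relative QE of a hypothetical OSC sensor at the OIII line (~500 nm);
# the numbers are the commenter's figures, not measured data
qe = {"R": 0.05, "G": 0.95, "B": 0.50}

# Normalize so the channel weights sum to 1
total = sum(qe.values())
weights = {ch: v / total for ch, v in qe.items()}

# These would plug into a PixelMath-style expression such as
#   OIII = w_R*R + w_G*G + w_B*B
for ch, w in weights.items():
    print(ch, round(w, 3))  # R 0.033, G 0.633, B 0.333
```

From there the weights can be nudged for aesthetics, as the commenter suggests.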
That is awesome! I‘ll try it out tomorrow!
Have fun!
Interesting video. I’ve been using the Askar D1+D2 filter set for about a year. During that time I’ve been experimenting with a variety of PI processes. This video shows a couple of new approaches that can be used to generate eye catching images. Looking forward to your review of the ColourMagic D1+D2 filter set.
Have fun with this approach!
Cuiv, thank you for this tutorial. I have an OSC camera too, and I'll try this later.
Enjoy!
Excellent video. Just got a Sulphur filter with zwo 533 mc pro. Can't wait to try this process.
Have fun!
Very clear explanations, another reason to buy PixInsight
Thank you Cuiv
Christian
Glad it was helpful!
Looking forward to the D1/D2 filters.
Hi Cuiv, a great educational video about processing, mate. Walt Disney would be proud of your image 😂
Ha! For sure lol
Nice video! Thanks, Cuiv. As the video started, I wrote down my question about why the OIII looks different in each filter, and you addressed that further along. But I'm still wondering: if the OIII and Ha/SII emission lines are so far apart, why do the sensor sites have so much overlap? The differences in the G/B response curves (I used an ASI2600MC response curve) seem very, very small between 672nm and 657nm; I wouldn't think there would be so much of a color difference in the blue-green. Perhaps those small differences are perceived more acutely than one would think just from the numbers on the response curve?
Hello Cuiv,
I’ve been following you for a long time but this video motivated me to actually buy the product. I was looking for a set of filters that did exactly what these do and was concerned about cost…saw your vid…bought the set! Thanks so much for the recommendation.
Cheers,
Simon
PS: I notice other commenters here asking you to do a test on the new Sharpstar 50ED. Please do, as I just bought one for wide field use but haven’t had the weather to be able to use it and would love to see it tested.
Hi Cuiv, great video as ever, thank you. I have an Altair Astro Tri RGB filter for my 294c which captures Ha, OIII, SII and NII all in one go. How would I extract the Ha, OIII, and SII from the one image in PixInsight? Thank you.
Very good video as usual, Cuiv!!! 👍👍 One doubt: if you make the OIII channel 0.5 of one filter and 0.5 of the other, aren't you dropping half of the signal? I mean, if you have one hour of exposure on one filter and another hour on the other, you should sum the channels, not take 50% of each... or am I wrong? 🤔
This is effectively the sum vs average debate - if your system has enough precision, you can interchangeably go from one to the other.
In one scenario you have 0.5*OIII1 + 0.5*OIII2 = 0.5*(OIII1 + OIII2), in the other you have OIII1 + OIII2.
You can go at will between the two by multiplying or dividing by 0.5. So one is a scaled version of the other, i.e. they are essentially the same!
However, if you have some bright stars in both images, it's possible the simple sum of those bright pixels would overflow, and thus saturate the pixels, thus losing information. In that case, the equivalency is no longer true!
So in an ideal world, the sum and the average contain the same information, can go from one to another at will. But in the real world we need to be careful, and direct sum can cause loss of data!
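The equivalence (and the clipping caveat) can be sketched in a few lines of Python, with made-up 16-bit ADU values just for illustration:

```python
# Two hypothetical OIII pixel values in 16-bit ADU (max 65535)
o1, o2 = 20000, 30000

# Average vs sum: one is just a scaled copy of the other
avg = 0.5 * o1 + 0.5 * o2   # 25000.0
total = o1 + o2             # 50000
assert total == 2 * avg     # same information, different scale

# But near saturation, a direct integer sum clips and loses data
b1, b2 = 60000, 60000
clipped = min(b1 + b2, 65535)   # sum overflows 16 bits: clipped
safe = 0.5 * b1 + 0.5 * b2      # average stays in range: 60000.0
assert clipped == 65535 and safe == 60000.0
```

This is why PixInsight works in normalized floating point internally: with enough precision, average and sum really are interchangeable.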
@@CuivTheLazyGeek thank you for the answer! To avoid overflowing pixels, could it be useful to sum the two OIII images without the stars?
Thank you for sharing!! Do you have a capture process handy? I am very new at this. Thank you
Absolutely nice video... as usual. Cuiv, you make being lazy easy... and that is worth a lot!!!
I'm an OSC guy wanting to migrate to mono... but not ready to shell out the money for >another< camera and a filter wheel. Would you recommend the "C set" or the "D set"? If the "D set" would get me really close to transitioning to full mono, I might go with that and pause on full mono... on the other hand, if full mono totally outweighs even the D set, then maybe I'd go with the C set as a bridge towards full mono. Whaddya say...?
Thanks for that processing video, good to know. I just bought the Askar D1 and D2 filters, so I'm anxious to see your review of those. Did a few imaging sessions with the D1 and so far they're great. A big difference from my ZWO duoband filter. Just a question though: do you take the same amount of data with both filters, or less with one than the other, and if so, which one? Thanks
I tend to favor the SII OIII filter, simply because SII is much weaker than Ha, so it makes sense to gather more data!
Hi Cuiv, great video as always, and with many more tips on how to process the image.
Need a bit of help: I'm using D1 and D2 filters. My stacked D1 and D2 images are not aligned (I suspect different exposure times + dithering), so when I combine the two OIII images I get a "double-vision" image.
So my problem is in the preparation and integration of the lights from the two filters. How do I align the images from the two filters so they match?
You use the StarAlignment Process to align both images as the very first step of processing!
Awesome video, as usual! I'm watching these even without being able to use filters with my rig 😂.
I have one question: wouldn't it make sense to keep all images linear up until the pixel math recombination? My uneducated guess is that it would provide more headroom for the creative part of the process. Does it make sense?
Thanks again for your nerdiness and passion, it's truly inspiring 🙏
You're absolutely right, you could do Linear Fit followed by Pixel Math in linear stage - but for whatever reason I then have more trouble processing the images... But that's probably just me!
Wanted to fact check one detail: in the beginning you mention the green tint is because the OSC camera has twice as many green pixels, and while you're later correct that the green channel has better signal than blue because of that, the intensity difference is not caused by the pixel count. It's actually a combination of the light pollution profile and the green pixels' sensitivity. When you demosaic the image, the average ADU values don't change based on whether a channel has 1/2 or 1/4 of the pixels.
Have you been using CFA drizzle for the final integration? I'm shooting with a similar setup, and that gives a distinctly sharper result at the pixel scale, especially when combined with BXT.
Another great video Cuiv, merci ! I was wondering about the OIII combination. I have the feeling that averaging the two OIII images (and even the blue and green) does not really take advantage of gathering twice the OIII data, as you don't double your integration or SNR with that technique. Did you give any thought to a different combination, such as a sum of the two or a max? Also, why not just use the green channel if it has a better SNR? Mixing it with the blue looks like we're just degrading it, or perhaps I'm missing something? These are questions I have myself and would love to hear your opinion on this topic :) Thanks!
These are great questions, and I don't really have an answer! Using the Max of the two or just the green channel is definitely a good idea, although the Blue channel is less affected by Ha filtering through... it's really to taste!
Hey professor Cuiv, thank you for the tutorial; hope I can get a clear night soon to try it out. One question: do you need to pre-stretch the images before doing the LRGB recombination? Or could you combine them linear and then use GHS on the combined image?
The official recommendation from PI is to stretch prior to LRGB combination - but there are other ways such as using PixelMath instead, prior to stretching!
I don't know if you reply to old videos, but I do have a question. I purchased D1 and D2 filters some months ago, but finally was able to get out on 1 July and then not until 24 August to image with them (and you thought you had bad weather in Tokyo!). I used the D1 in July and the D2 filter in August. So, am I going to need to align these images before I process? I'm sure that they're fairly close to being aligned, but not perfectly. Thanks!
Thank you for doing this and I can't wait to see your comparison with the more expensive D1 & D2. I'll be really disappointed if I could have gotten away with the much cheaper C series. 😄
Spoiler alert: the D1 and D2 are definitely superior, especially in light polluted cities!
Thanks for the excellent tutorial Cuiv - consistently great content from you, I've learnt so much over the last year or so. I've a D1/D2 combo BNIB here so will be trying them out with this workflow if the clouds ever leave the Highlands of Scotland...
Do you see any role for GHS in your workflow?
Glad this is helpful! I'm still not a big fan of GHS... because it requires too much manual work, and I keep second guessing myself!
Just got the Askar D SII/OIII to go with my Antlia ALP-T, and I'm excited to watch this video, as now comes the hard part lol
I prefer to use the Foraxx palette utility rather than narrowband normalization for SHO composition; it provides better results for sure
It really depends on the target though!
Hope to see a Siril tutorial, thank you ♥
Another question... what's the difference between separating all the channels and merging them back together with LRGB color combination, vs going straight to narrowband normalization from the beginning? Is the outcome much different?
How necessary is PixInsight to getting these results? Is the advantage over something like Siril just ease or convenience? Or can you simply not get the same results?
It's difficult for me to say, I haven't used Siril for such palettes...
Were the two filters stacked separately or was everything stacked together using groups to sort them into separate stacks?
@cuivlazygeek - How does one handle differences in field rotation between the two image captures using the Ha/OIII and SII/OIII filters? I am looking to combine a new SII/OIII image with an old Ha/OIII image and when conducting the pixel math to merge the two OIII images I discovered that the field is rotated slightly between the two images, yielding a spiral looking result. Is there an easy way within PI to force the orientation of the original two images to be identical? Note that I first registered and stacked using DSS for each image separately. Thanks in advance.
Use the StarAlignment process in PI!
Perfect - Thank you Cuiv!
I already have the dual band filter (L-eXtreme) from Optolong. They do not seem to make the corresponding OIII/SII… Looking at your processing, the OIII2 seems a bit redundant. An OIII/Ha + SII should give a similar result in the end. Did you try this?
Redundant, but lets us have more SNR for OIII (specifically increase SNR by a factor of sqrt(2) assuming same exposure time for both filters)
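The sqrt(2) claim can be sanity-checked numerically: averaging two equally deep, independent stacks of the same signal cuts the noise by roughly a factor of 1.41. A toy simulation with made-up signal and noise levels:

```python
import random
import statistics

random.seed(0)
signal, sigma = 100.0, 10.0

# Two independent OIII stacks of the same target, equal depth
stack1 = [signal + random.gauss(0, sigma) for _ in range(100_000)]
stack2 = [signal + random.gauss(0, sigma) for _ in range(100_000)]

# Combine as in the video's PixelMath: 0.5*OIII1 + 0.5*OIII2
combined = [0.5 * (a + b) for a, b in zip(stack1, stack2)]

ratio = statistics.stdev(stack1) / statistics.stdev(combined)
print(round(ratio, 2))  # ~1.41, i.e. sqrt(2) better SNR
```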
What would happen if you ran HDRM on the individual channels before combining? Bad idea? I’m definitely getting the C1/C2 or D1/D2 filters. Just how much $$ do I want to part with??
That can work - or combine in linear state with PixInsight!
@@CuivTheLazyGeek yes, and I need to rewatch this one again. My poor ol’ partially fossilized brain can only absorb so much!
And, do you see an issue with mixing filter manufacturers? I have an Optolong L-eXtreme and thinking about buying a Askar Oiii/Sii (D2, I believe). Thanks!
Hello Cuiv, thank you for sharing all your knowledge. I am running into a problem, and I hope you can help. I took 80 pics (5 minutes each) with the D1 and 80 pics (5 minutes each) with the D2 filter. The D1 was first over many nights, and then the D2 over many nights. I have a framing error (I think), because when I try to add (OIII1*0.5 + OIII2*0.5), the stars do not line up, yielding an image with double stars. I used integration, which worked, but my issue now is that when trying to use the LRGB combination, the image I created (oiii) does not appear on the list. I hope you have a simple solution. Thank you for your excellent support.
Cuiv, I figured it out. I performed a star alignment before splitting the channels, and that took care of the issue. Happy Memorial Day!!
I only wish there was a solution for HO and SO on fast systems like a RASA.
Yeah the C1 and C2 filters are decent for that since they have such wide bandpasses, and the D1 and D2 filters I'm currently testing should also perform well!
What are your PC specs, such that the RC Astro tools work so fast? Mine takes 10 minutes or so for BlurX and StarX.
Can you post your master images so I can follow along and process the same data as you do in the video?
👍👍!! Hmm, the bigger part of the word astrophotography is 'photography', which is art. Not just art, though; it's also documentation data for later analysis... Which came first, the chicken or the egg? 😜
Can the C1/C2 filters be used with fast optics? We have terrible weather here in the UK, so I'm trying to put together a system to capture as much data as possible in the few clear nights a month we get.
Yes, because the bandpasses are so wide!
@@CuivTheLazyGeek Amazing, thank you so much!
When I combine the two different OIII images (OIII1 + OIII2), my image becomes blurry. Any idea what happened?
Are the images aligned to one another?
Are there triband filters?
Excellent 👌…! Cool 👍…
@Cuiv What do you use for your deep sky stacking?
Pixinsight
Yep, PixInsight :)
Thanks!
No problem!
Am I in a dream, or has the Mike Cranfield / (EDIT) Adam Block ImageBlend script just copied the Photoshop 'blend mode' function wholesale? 😆 This includes the daft names which bear no relation to what they do! I'm not complaining; I thought I was the one and only keeper of the secret that Photoshop's 'select all' > 'copy' > 'paste' > 'select blend mode', done, is hands down the best, fastest, and most accurate way to put stars back into an image; certainly better than PixelMath, which appears to me to bleach the starless image. Anyway, I'm very happy that I no longer have to load up Photoshop to do this!
I need to have a look at it! I thought blend mode had the same option names in Gimp as well though!
@@CuivTheLazyGeek I think you're right. Someone else pointed this out too. I guess Adobe doesn't have any ownership of those terms, which I recognise as being based on film development terms. I just thought they were being naughty in the layout and presentation of the tool, it looking somewhat familiar.
@@CuivTheLazyGeek maybe I'm deluding myself, but I am also lazy and always looking for shortcuts... so I was wondering what would happen if we used this merging script to simply merge the two outputs from the stack, i.e. the HaOIII and the SIIOIII, WITHOUT any further stages such as separating out RGB, etc. If this script works well, would a simple addition of the two frames not also result in 'double' OIII? Or would this also double the noise?
Clever.
Nicee!
Thank you! Cheers!
Now if someone would just chase those clouds away, cover the Moon and redo this tutorial in Siril
Hahaha I have the power to do the first two, but I'm not good enough at Siril for the last one!
You make such good content in this hobby; you should have more subs. No offense, but I recommend maybe changing the channel name to 'Cuiv, The Lazy Astrophotographer' or 'Astronomer.' Basically, any name with 'astro' in it.
Yeah I know... it's a good idea, but I wonder if I should... A lot of people identify me with lazy geek by now... Maybe Cuiv Astrogeek...
@@CuivTheLazyGeek Yeah, I like that one. Do it... no fear.
Very useful video, but personally I was hoping for a true Hubble palette colour, so I could use the technique on my own data; the colours you ended up with are not too pleasing to my eye… sorry…
You can always play with the color balance and hue once you're done to be however you'd like!
@@paulv5924 yes, I realise that, but when Cuiv used the narrowband normalisation script and hit the SHO option, the colour was way off, and not what I expected at all, even with the good data he had. It should have been much closer to the SHO palette than it was. TBH, I was quite surprised…