James - I wanted you to know that this video helped me a lot. The key for me was the explanation and the discussion of the formulas. Thanks, Ray
I’m glad, Ray! I had always seen those equations and thought “what!?”. But when you take them apart, there’s a reason behind the madness. Thanks for watching!
You've done it again! I love how you are constantly asking many of the right questions that are often ignored. Fantastic work!
Thanks for watching!
I can't thank you enough for making the complexities of this endeavor understandable. Please keep up the good work.
Glad you found this vid useful, Dennis!
I have watched quite a few of your videos & always appreciate your logical, analytical, and realistic approach to this hobby. Always helpful info.
Thanks for watching the videos, David! Much appreciated.
Thanks so much! Just what I needed!
@@andrewoler1 Thanks! I hope it helps. Lots of different ways to play the game.
Really well done, thank you. FYI, "ViNcent PerEZ" is Vicent Peris. Thanks
Thanks for watching and, especially, for the correct pronunciation! Wish I had a time machine to go back and fix all of my errors. Haha.
Once again thanks James. I recently bought an EdgeHD 8”, a 294MM, and an EFW. Have been using OSC cameras before. Went back to check this out.
I was creating a mask and applying Ha to the combined RGB image as a percentage of the new image (between 20 and 60). It actually worked quite well. This approach makes more sense, and it's probably easier (and more accurate) to pull out just the Ha data and apply it to the Red channel. Will give it a try. Thanks
Thanks for watching! Hey, any method that gives you results you like is a good method.
@@Aero19612 Just checking in, and to be clear: when I said this approach makes more sense, I was referring to yours :)
@@astrounclejoe2572 Haha. I know! And when I said "if you have a process that produces good results already," I was referring to your approach! Play around and see what you like. Drop me a comment with your conclusions. You might end up liking what you were doing already. There are MANY roads to basically the same place.
Thanks a lot. Finally found a way that works on my OSC data, RGB and L-Ultimate. :)
There are various approaches out there. I find this approach, or variants of it, to be flexible. Thanks for watching!
Works like a charm. Thanks!
Great! Thanks for watching
Very interesting as always. Hope to use the technique in the M51 data I have more or less captured over the past few weeks.
Excellent! I hope it works for you. Just experiment until you're happy with the result.
very well presented as usual
Thanks for watching, Rob!
For sure you are the best!
Haha. Not even close, Kayed. Thanks for watching!
Thank you very much
Thanks for watching, Mohammad! Hope the procedure works well for you. I think Ha makes a big difference to an RGB image. Good luck!
James, thanks for showing this method! Question: if you perform a linear fit on the Ha to the Red channel, would this eliminate any need to scale the data via a formula? I have never tested this, nor have I investigated the linear fit process in this way, but I wonder if this would stretch the extents of the Ha data to be more linear to the Red, then subtract the difference and add back into Red??
No, linear fit is the opposite of what we would need to apply the original formula. When we image, we're always trying to expose long enough to get the peak of the histogram just off of the left side. In effect, we are doing a (poor) linear fit by setting the exposure time and gain. With a broadband filter, I use a lower gain and shorter exposure because so much light is coming in. When I use a narrowband filter, I have to increase the gain and use a longer exposure to put the peak of the histogram at about the same location. If I want to apply the original [Ha - R*(Nha/Nr)] formula, I should use the same gain and exposure for the Ha as when shooting Red. In that case, the Ha would be a very low signal indeed (i.e., the Ha peak would be much further to the left than the Red peak). Linear fit aligns the peaks.
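To put rough numbers on that scaling (illustrative bandwidths I'm assuming here, not values from the video): a typical Ha filter is about 7 nm wide and a red filter passes roughly 100 nm, so Nha/Nr ≈ 0.07 and the equal-exposure subtraction would be roughly
Ha - R*0.07
Since we actually shoot the Ha at higher gain and longer exposure, its pixel values come in scaled well above that equal-exposure case, which is why the practical coefficient in the streamlined Ha - R*s1 expression ends up larger (around 0.2 to 0.5).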
Love your videos as always, the detail and rigour are very pleasing. I am definitely going to give this a try, but one thing I wondered about was whether, in the R-subtracted Ha data, it would be worth using StarNet to remove the residual stars so as not to impact on them in the final combination? I know StarNet only works on non-linear images, but I’ve seen methods where the image is stretched, StarNet is run and the image is reverse-stretched back to linear. I have some M33 data that I’ve been meaning to work through and your video has given me the nudge to get on and do it!
Hey Terry. I'm a big fan of StarNet and generally use it to remove stars and then combine starless channels for narrowband or broadband processing. I have also combined Ha with Red in the nonlinear space. That can work too (but I think combining in the linear state is better). I like the idea of going back to the linear state after removing the stars. If you use a single nonlinear expression, it certainly should be possible to reverse the process with PixelMath (I tend to do an iterative stretch, adjust the black point, then repeat, which I suspect is irreversible). I'm not claiming the process shown in the vid is "the best" approach; I just wanted to revisit it since many people do use it, show the concept it is based on and why we have to play with the parameters, and then streamline the process by removing unnecessary constants in the equations. Thanks for watching!
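As a sketch of what that could look like (assuming the stretch is a single MTF application; mtf() is PixelMath's midtones transfer function, and a stretch done with midtones balance m is undone by applying mtf with 1-m):
mtf(0.25, $T) to stretch the linear image, then run StarNet on the result, then
mtf(0.75, $T) on the starless image to return it to the linear state.
An iterative stretch with black-point clipping discards data, which is why I doubt that version can be reversed.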
Hi James, the video was thought-provoking which I really love. I have spent a lot of time up to now trying to improve my data acquisition, so next I need to improve my processing, and this has given me lots to think about. Thanks again.
Interesting process. Any idea how this would compare to the existing NBRGBCombination script? I believe it tries to do something similar to what you've outlined.
Agreed. I suspect the script is based on the fundamental equation shown in the video (since it came from PixInsight in the first place). I have never found a one-size-fits-all solution. They all seem to require some tweaking for each project. Thanks for watching, Kyle!
Boom! You iza top man - Thank you!
Thanks for watching, Chaz!
Hello James, your PixelMath formula works well on the RGB image. But when I add my luminance data to the HaRGB image, all the Ha nebulae turn pinkish. Can I also use this formula with the nonlinear LRGB image? Or is there any other method where I can do it with the combined LRGB image?
There are several approaches. First, once you get your "new Ha" image, you can add it to the Lum data much like you add it to the Red channel. This will ensure that any additional detail in the Ha shows up in Lum and in the final detailed LRGB image. As for color, you might try adding the "New Ha" to the Blue channel as well (maybe not at the same strength as in Red; maybe 60%-ish). This will tend to leave you with more of a magenta for the Ha contribution.
Hello James, thank you for answering. But I'm a total beginner in PixInsight and I don't know which formula I have to enter in PixelMath. Can you show me how the formula has to look in PixInsight for adding the NewHa to the luminance? Thx in advance
Right. So use the expression:
Ha - R*s1
to get the NewHa. In this case, your Ha image is called "Ha" in the image identifier tab on the upper left of the window and your red image is labeled "R" in the image identifier. Try different values of s1 (say, between 0.2 and 0.5, per the video).
Once you have a new image labeled "NewHa", you can add it to other images.
R + (NewHa - med(NewHa))*s2 for a new red channel with Ha (play with s2) or
B + (NewHa - med(NewHa))*s3 for a new blue channel with Ha (maybe let s3 = s2*0.6)
To add Ha to Lum, use
Lum + (NewHa - med(NewHa))*s4
Here we assumed that the luminance image is named "Lum" in the image identifier. You can adjust s4 (s4 should be set to something like "s4=1.0;" in the Symbols tab of PixelMath).
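For example, all four scale factors could be declared together in the Symbols tab, something like (just starting points to experiment with, not magic numbers):
s1=0.3; s2=1.0; s3=0.6; s4=1.0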
As an alternative to the above, you can "blend" images together with a screen blend. The PixelMath formula to blend Ha and Lum, for example, is:
1 - (1 - Lum)*(1 - (NewHa-med(NewHa))*s4)
hope that helps!
@@Aero19612 Hi James, this is awesome! Thank you so much for helping me out! Subbed to your excellent work!
I know this is a little old but... can I use this process to add LUM to my existing M33 image? Thanks.
I don't think so. This method is about combining data taken with a narrowband filter with data from a broadband filter that covers the narrowband filter's bandwidth.
All is not lost, however. There is a good tool in PixInsight for what you want to do: the LRGBCombination tool. Open the tool and uncheck the R, G, and B lines. Then place your Lum image identifier in the L slot. Then drag and drop the triangle onto your color image (you might want to make a clone of the color image first). For added fun, check the chrominance noise reduction and play with the saturation and lightness sliders. If you want a brighter image, move the lightness slider to the left. If you want more color saturation, move the saturation slider to the left.
Thanks
You bet!