Adding Ha to Red or Luminance Data

  • Published: 22 Nov 2024

Comments • 46

  • @rayhodel8069
    @rayhodel8069 2 years ago

    James - I wanted you to know that this video helped me a lot. The key for me was the explanation and the discussion of the formulas. Thanks, Ray

    • @Aero19612
      @Aero19612  2 years ago

      I’m glad, Ray! I had always seen those equations and thought “what!?”. But when you take them apart, there’s a reason behind the madness. Thanks for watching!

  • @csb0xc4rs
    @csb0xc4rs 4 years ago +2

    You've done it again! I love how you are constantly asking many of the right questions that are often ignored. Fantastic work!

    • @Aero19612
      @Aero19612  4 years ago

      Thanks for watching!

  • @dennismichels7194
    @dennismichels7194 4 years ago

    I can't thank you enough for making the complexities of this endeavor understandable. Please keep up the good work.

    • @Aero19612
      @Aero19612  4 years ago

      Glad you found this vid useful, Dennis!

  • @davidharris8037
    @davidharris8037 2 years ago

    I have watched quite a few of your videos & always appreciate your logical, analytical, and realistic approach to this hobby. Always helpful info.

    • @Aero19612
      @Aero19612  2 years ago

      Thanks for watching the videos, David! Much appreciated.

  • @andrewoler1
    @andrewoler1 3 months ago

    Thanks so much! Just what I needed!

    • @Aero19612
      @Aero19612  2 months ago

      @@andrewoler1 Thanks! I hope it helps. Lots of different ways to play the game.

  • @MastersofPixInsight
    @MastersofPixInsight 1 year ago

    Really well done, thank you. FYI, "ViNcent PerEZ" is Vicent Peris. Thanks

    • @Aero19612
      @Aero19612  1 year ago +1

      Thanks for watching and, especially, for the correct pronunciation! Wish I had a time machine to go back and fix all of my errors. Haha.

  • @astrounclejoe2572
    @astrounclejoe2572 2 years ago

    Once again thanks James. I recently bought an EdgeHD 8”, a 294MM, and an EFW, having used OSC cameras before. Went back to check this out.
    I was creating a mask and applying Ha to the RGB combined image as a percentage of the new image (between 20-60). It actually worked quite well. This approach makes more sense and is probably easier (and more accurate): pull out just the Ha data and apply it to the Red channel. Will give it a try. Thanks

    • @Aero19612
      @Aero19612  2 years ago +1

      Thanks for watching! Hey, any method that gives you results you like is a good method.

    • @astrounclejoe2572
      @astrounclejoe2572 2 years ago

      @@Aero19612 Just checking, and to be clear: when I said this approach makes more sense, I was referring to yours :)

    • @Aero19612
      @Aero19612  2 years ago +1

      @@astrounclejoe2572 Haha. I know! And when I said "if you have a process that produces good results already," I was referring to your approach! Play around and see what you like. Drop me a comment with your conclusions. You might end up liking what you were doing already. There are MANY roads to basically the same place.

  • @gunderstrmberg3501
    @gunderstrmberg3501 8 days ago

    Thanks a lot. Finally found a way that works on my OSC data, RGB and L-Ultimate. :)

    • @Aero19612
      @Aero19612  2 days ago

      There are various approaches out there. I find this approach, or variants of it, to be flexible. Thanks for watching!

  • @zelodec
    @zelodec 2 years ago

    Works like a charm. Thanks!

    • @Aero19612
      @Aero19612  2 years ago

      Great! Thanks for watching

  • @chandrainsky
    @chandrainsky 3 years ago

    Very interesting as always. Hope to use the technique in the M51 data I have more or less captured over the past few weeks.

    • @Aero19612
      @Aero19612  3 years ago

      Excellent! I hope it works for you. Just experiment until you're happy with the result.

  • @slzckboy
    @slzckboy 2 years ago

    very well presented as usual

    • @Aero19612
      @Aero19612  2 years ago

      Thanks for watching, Rob!

  • @kayedsss
    @kayedsss 3 years ago

    For sure you are the best!

    • @Aero19612
      @Aero19612  3 years ago

      Haha. Not even close, Kayed. Thanks for watching!

  • @mohammadranjbaran1897
    @mohammadranjbaran1897 3 years ago

    Thank you very much

    • @Aero19612
      @Aero19612  3 years ago

      Thanks for watching, Mohammad! Hope the procedure works well for you. I think Ha makes a big difference to an RGB image. Good luck!

  • @billblanshan3021
    @billblanshan3021 4 years ago

    James, thanks for showing this method! Question: if you perform a linear fit of the Ha to the Red channel, would this eliminate any need to scale the data via a formula? I have never tested this, nor have I investigated the linear fit process in this way, but I wonder if this would stretch the extents of the Ha data to be more linear to the Red, then subtract the difference and add back into Red??

    • @Aero19612
      @Aero19612  4 years ago

      No, linear fit is the opposite of what we would need to apply the original formula. When we image, we're always trying to expose long enough to get the peak of the histogram just off of the left side. In effect, we are doing a (poor) linear fit by setting the exposure time and gain. With a broadband filter, I use a lower gain and shorter exposure because so much light is coming in. When I use a narrowband filter, I have to increase the gain and use a longer exposure to put the peak of the histogram at about the same location. If I want to apply the original [Ha - R Nha/Hr] formula, I should use the same gain and exposure for the Ha as when shooting Red. In this case, the Ha would be a very low signal indeed (i.e., the ha peak would be much more to the left than the red peak). Linear fit aligns the peaks.
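      [Editor's note] The point about linear fit aligning the histogram peaks, and thereby destroying the native Ha-to-Red signal ratio, can be illustrated numerically. A minimal NumPy sketch, where the synthetic frames, noise level, and scale factors are all illustrative assumptions (PixInsight's LinearFit is a similar least-squares rescaling in spirit):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear pixel data: Ha is a much fainter rendering of the
# same scene as Red (matched gain/exposure), plus a little noise.
red = rng.random(10_000) * 0.4 + 0.05
ha = 0.25 * red + 0.01 + rng.normal(0.0, 0.002, red.shape)

# Least-squares fit red ~ slope*ha + intercept: this is what a linear
# fit of Ha against Red does -- rescale Ha to match Red's signal level.
slope, intercept = np.polyfit(ha, red, 1)
ha_fit = intercept + slope * ha

# After the fit, the Ha histogram peak sits where Red's does, so the
# native Ha/Red ratio the original formula relies on is gone.
```

      Here `slope` comes out well above 1 and the median of `ha_fit` lands on the median of `red`, which is exactly the peak-alignment behavior described above.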

  • @tezza0905
    @tezza0905 4 years ago

    Love your videos as always, the detail and rigour are very pleasing. I am definitely going to give this a try, but one thing I wondered about was whether, in the R-subtracted Ha data, it would be worth using StarNet to remove the residual stars so as not to impact on them in the final combination? I know StarNet only works on non-linear images, but I’ve seen methods where the image is stretched, StarNet is run and the image is reverse-stretched back to linear. I have some M33 data that I’ve been meaning to work through and your video has given me the nudge to get on and do it!

    • @Aero19612
      @Aero19612  4 years ago

      Hey Terry. I'm a big fan of StarNet and generally use it to remove stars and then combine starless channels for narrowband or broadband processing. I have also combined Ha with Red in the nonlinear space. That can work too (but I think combining in the linear state is better). I like the idea of going back to the linear state after removing the stars. If you use a single nonlinear expression, it certainly should be possible to reverse the process with PixelMath (I tend to do an iterative stretch, adjust the black point, then repeat, which I suspect is irreversible). I'm not claiming the process shown in the vid is "the best" approach. I just wanted to revisit it since many people do use it, show the concept it is based on and why we have to play with the parameters, and then streamline the process by removing unnecessary constants in the equations. Thanks for watching!

    • @tezza0905
      @tezza0905 4 years ago

      Hi James, the video was thought-provoking which I really love. I have spent a lot of time up to now trying to improve my data acquisition, so next I need to improve my processing, and this has given me lots to think about. Thanks again.

  • @Lasidar
    @Lasidar 4 years ago

    Interesting process. Any idea how this would compare to the existing NBLRGBCombination script? I believe it tries to do something similar to what you've outlined.

    • @Aero19612
      @Aero19612  4 years ago

      Agreed. I suspect the script is based on the fundamental equation shown in the video (since it came from Pixinsight in the first place). I have never found a one-size-fits-all solution. They all seem to require some tweaking for each project. Thanks for watching, Kyle!

  • @chazparvez4970
    @chazparvez4970 3 years ago

    Boom! You iza top man - Thank you!

    • @Aero19612
      @Aero19612  3 years ago +1

      Thanks for watching, Chaz!

  • @apg.7461
    @apg.7461 2 years ago

    Hello James, your PixelMath formula works well on the RGB image. But when I want to add my luminance data to the HaRGB image, all the Ha nebulae turn pinkish. Can I also use this formula with the LRGB nonlinear image? Or is there any other method where I can do it with the combined LRGB image?

    • @Aero19612
      @Aero19612  2 years ago

      There are several approaches. First, once you get your "new Ha" image, you can add it to the Lum data much like you add it to the Red channel. This will ensure that any additional detail in the Ha shows up in Lum and in the final detailed LRGB image. As for color, you might try adding the "New Ha" to the Blue channel as well (maybe not at the same strength as in Red; maybe 60%-ish). This will tend to leave you with more of a magenta for the Ha contribution.

    • @apg.7461
      @apg.7461 1 year ago

      Hello James, thank you for answering. But I'm a total beginner in PixInsight and I don't know which formula I have to enter in PixelMath. Can you show me how the formula has to look in PixInsight for adding the NewHa to the luminance? Thx in advance

    • @Aero19612
      @Aero19612  1 year ago

      Right. So use the expression:
      Ha - R*s1
      to get the NewHa. In this case, your Ha image is called "Ha" in the image identifier tab on the upper left of the window and your red image is labeled as "R" in the image identifier. Try different values of s1 (say, between 0.2 and 0.5 per the video).
      Once you have a new image labeled "NewHa", you can add it to other images:
      R + (NewHa - med(NewHa))*s2 for a new red channel with Ha (play with s2), or
      B + (NewHa - med(NewHa))*s3 for a new blue channel with Ha (maybe let s3 = s2*0.6).
      To add Ha to Lum, use
      Lum + (NewHa - med(NewHa))*s4
      Here we assumed that the luminance image is named "Lum" in the image identifier. You can adjust s4 (s4 should be set to something like "s4=1.0;" in the Symbols tab of PixelMath).
      As an alternative to the above, you can "blend" images together. The PixelMath formula to blend Ha and Lum, for example, is:
      1 - (1 - Lum)*(1 - (NewHa - med(NewHa))*s4)
      Hope that helps!

    • @apg.7461
      @apg.7461 1 year ago

      @@Aero19612 hi James, this is awesome! Thank you so much for helping me out! Subbed to your excellent work!
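
    [Editor's note] The PixelMath recipe in the thread above can be sketched in NumPy to see what each expression does numerically. The random test frames, the clipping to [0, 1], and the values of s1, s2, s4 are all illustrative assumptions; the variable names follow the image identifiers used in the comments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for linear, [0,1]-normalized frames (purely synthetic).
Ha = rng.random((64, 64)) * 0.2
R = rng.random((64, 64)) * 0.2
Lum = rng.random((64, 64)) * 0.3

s1, s2, s4 = 0.3, 1.0, 1.0

# Continuum subtraction: Ha - R*s1  ->  "NewHa"
NewHa = np.clip(Ha - R * s1, 0.0, 1.0)

# Additive combine: R + (NewHa - med(NewHa))*s2
newR = np.clip(R + (NewHa - np.median(NewHa)) * s2, 0.0, 1.0)

# Screen blend: 1 - (1 - Lum)*(1 - (NewHa - med(NewHa))*s4)
boost = np.clip((NewHa - np.median(NewHa)) * s4, 0.0, 1.0)
blended = 1.0 - (1.0 - Lum) * (1.0 - boost)
```

    Subtracting the median before adding keeps the background level roughly unchanged, so only above-median Ha signal is pushed into the target channel; the screen blend can only brighten Lum, never darken it.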

  • @sanddollarastro8017
    @sanddollarastro8017 2 years ago

    I know this is a little old but... can I use this process to add LUM to my existing M33 image? Thanks.

    • @Aero19612
      @Aero19612  2 years ago +1

      I don't think so. This method is about combining data taken with a narrowband filter with data from a broadband filter that covers the narrowband filter's bandwidth.
      All is not lost, however. There is a good tool in PixInsight for what you want to do: the LRGBCombination tool. Open the tool and uncheck the R, G, and B lines. Then place your Lum data filename in the L slot. Then drag and drop the triangle onto your color image (you might want to make a clone of the color image first). For added fun, check the chrominance noise reduction and play with the saturation and lightness sliders. If you want a brighter image, move the lightness slider to the left. If you want more color saturation, move the saturation slider to the left.

  • @wanderingquestions7501
    @wanderingquestions7501 3 years ago

    Thanks