How to Tell If Your Image Needs More Integration Time

  • Published: 5 Nov 2024

Comments • 38

  • @TheGoKidd
    @TheGoKidd 2 days ago +5

    Cliff, this was EXACTLY what I needed to hear at my stage of development. Thank you so very much. About two years into my journey in this activity, I've discovered that my processing of a data-starved version of an image feels entirely different than processing a data-rich version. Now, if I find myself trying too hard to pull detail out of an image, I'll know that I just need to go back for more data. But, I still process after every session ... one never leaves Christmas presents unopened!

    • @SKYST0RY
      @SKYST0RY  2 days ago +1

      That's how I feel. I can always see what the results of last night's additional information will be.

  • @deepskydetail
    @deepskydetail 1 day ago +1

    Great video! In the end, your eyes/brain are great judges of when to stop, which your video demonstrates so clearly. I also really wish I had your skies! Thank you!

    • @SKYST0RY
      @SKYST0RY  1 day ago +1

      Good sky does help, especially with LRGB which will bring out every good thing in an image, and also emphasize every problem with the sky.

  • @BurgerOosthuizen
    @BurgerOosthuizen 2 days ago +2

    Wow, what a story, how exciting! We are on a journey with you 😁

  • @Seafox0011
    @Seafox0011 2 days ago +1

    Great counterpoint to our times of perceived need for instant gratification. A patient investment of time, plus some fortune in the seeing and the technical capabilities of the equipment, brings rewards. Wonderful exposition of how making that investment in time reveals heavenly gems.

  • @annikasoraya4322
    @annikasoraya4322 2 days ago +1

    Well done buddy!
    Great stuff as per usual
    Take care,
    Annika
    🔮🍹🔭

  • @danielpetzen
    @danielpetzen 2 days ago +1

    Thanks for another engaging and educational video. This was really interesting to watch. I've done a few widefields (I'm originally an SCT user, so most refractors are wide field to me) and noted that brighter nebulosity is extremely sharp, while darker areas lack detail. I thought it was integration time, and your video confirms it.
    I often use the signal-to-noise ratio (in dB) to gauge the sharpness of an image. I've primarily used it to test out integration algorithms (Lanczos-3 x2 gives the best results for my refractor), but also the effects of integration time. I may even graph SNR against integration time at some point.
    Again, thanks for a great video!
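
    For anyone who wants to try the dB bookkeeping this comment describes, a minimal sketch: stacking N sub-exposures with uncorrelated noise improves SNR by a factor of sqrt(N). The function names here are illustrative, not from any particular stacking tool.

    ```python
    import math

    def stacked_snr(snr_single: float, n_subs: int) -> float:
        """Uncorrelated noise averages down under stacking, so SNR
        grows with the square root of the number of sub-exposures."""
        return snr_single * math.sqrt(n_subs)

    def snr_gain_db(n_subs: int) -> float:
        """Improvement over a single sub in decibels, using the
        20*log10 convention for an amplitude ratio."""
        return 20 * math.log10(math.sqrt(n_subs))

    # Quadrupling the number of subs doubles SNR (about +6 dB):
    print(stacked_snr(5.0, 4))       # 10.0
    print(round(snr_gain_db(4), 2))  # 6.02
    ```

    The diminishing returns discussed throughout this thread fall out of the square root: each doubling of total time buys the same fixed dB gain.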

    • @SKYST0RY
      @SKYST0RY  2 days ago +1

      I'm going to cover what SNR is and what it's good for in a future video. I had thought about including it here, but it would have added a lot of time to this video, and it's somewhat tangential. SNR is not a great tool for determining whether an image is finished. SNR tools don't know what is signal and what is noise; they are simply algorithms that generally define noise as information that doesn't fit into patterns, so they are inherently subject to inaccuracy. Also, SNR is a ratio that attempts to quantify signal relative to noise. It is not a measure of how complete an image is (whether all possible information has been collected), so SNR cannot tell you if an image is "done". Imagine an image with no information equals 0, and an image with complete information equals 1. Every time you assess your image, you are looking at a snapshot of where the information now stands. SNR might tell you roughly what your signal-to-noise ratio is at each snapshot, but it does not know what "1" (a complete image, a condition of all possible information acquired) is. Dylan O'Donnell did a good video on this about a month ago. ruclips.net/video/T7MSwo-V2vE/видео.htmlsi=Lrnime288j4e6FI0

    • @Z3ph0d_videos
      @Z3ph0d_videos 2 days ago

      @@SKYST0RY You're absolutely right, and I'm kicking myself for not adding a caveat around SNR being exactly what you say: a simple mathematical calculation that just lumps all the data together, rather than taking into account features (patterns) and what really makes up an image.
      I've seen that excellent video with Dylan O'Donnell, which is why I take SNR with a huge pinch of salt. I have to admit that I have used it to decide which type of integration to use for the same image (when I can't tell by visual inspection).
      Even though Dylan's video is great, I would love your take on it, and perhaps an exploration of better ways of getting metrics around image quality relating to integration time and integration algorithms.
      [Note: RUclips grabbed the wrong account. This is still @danielpetzen]

    • @SKYST0RY
      @SKYST0RY  2 days ago +1

      @@Z3ph0d_videos Metrics are a tough one for this. To me, it's like defining beauty: how does one define it? It's why I just visually judge an image and keep adding integration time till it's reached my goal for clarity. Some areas I'll intentionally leave not fully resolved to draw the eye to the area of interest, as in NGC 1333, where the roughly S-shaped line follows the light up the center of the nebulosity. I wanted the surrounding area a little unresolved to guide the eye.
      One handicap I have in terms of helping persons find strategies to deal with noise is that there is no light pollution where I live, so I only have to deal with read noise, and with modern sensors that's virtually nothing. It's not an area I have a lot of practical experience with. However, I would nevertheless advise stacking and developing the data and then making a visual determination of whether you have reached your goal. I have found that good developing strategies can push the inherent potential of the information farther than is often estimated. For example, the image of the Horsehead in this video only had 3 hours of low frequency integration in it, and about 9 hours of high frequency integration, six of which were gathered during a full moon night with the Horsehead

  • @jimcarter2092
    @jimcarter2092 2 days ago +1

    Excellent video! Thank you for posting!

  • @KJRitch
    @KJRitch 2 days ago +1

    As a newbie, it's a question of quantity over quality. Over the past year I've enjoyed switching targets after only 6 hours of integration, mostly galaxies with my C8 at f/7.1 with an ASI071MC. Now I'm shooting some nebulae, and I'm finding more integration is required than for galaxies with the same setup. My processing skills are still beginner level, and this part of the hobby is harder for me, so having more targets to process helps me learn more, to where I can go back to earlier data and try again.

    • @SKYST0RY
      @SKYST0RY  2 days ago

      I think a lot of us start that way. I remember getting back into astrophotography after a long university hiatus and being so excited about it that I was shooting several targets a night. It's perfectly normal. After a while, my gears shifted toward a focus on making each image the best possible. That only happens with more integration time. Of course, if you have more than one mount, camera and telescope, you can collect those photons faster.

  • @barnaclewatcher4060
    @barnaclewatcher4060 2 days ago +1

    Nice video. Thanks for making great content.

  • @posmond19664
    @posmond19664 1 day ago

    For me, as a general rule, when I add more integration I use the Fibonacci sequence as a guide. To bring out a noticeable improvement in an image, go to the next number in the sequence: if you already have 2 hours, get at least 3; if you have 3, then get at least 5; then at least 8, and so on. More simply, you could just double: 2, 4, 8, 16. If you already have 8 hours, going to 9 you will barely notice any change in the image, but going to 14, 15 or 16 hours will take the image to the next level.
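
    This rule of thumb is easy to mechanize. A minimal sketch (the function name is my own, not the commenter's):

    ```python
    def next_fibonacci_target(hours: float) -> int:
        """Smallest Fibonacci number strictly greater than the hours
        already collected -- the commenter's guide for how much total
        integration should yield a noticeable improvement."""
        a, b = 1, 2
        while b <= hours:
            a, b = b, a + b
        return b

    print(next_fibonacci_target(2))  # 3
    print(next_fibonacci_target(3))  # 5
    print(next_fibonacci_target(8))  # 13
    ```

    Because consecutive Fibonacci numbers grow by roughly 1.6x, this is a slightly gentler schedule than straight doubling, but both keep the increments proportional to what you already have.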

    • @SKYST0RY
      @SKYST0RY  7 hours ago

      It may well be a good rule. I have found information reaches a kind of critical mass. I can have a night of collecting several hours of good integration that makes only a small improvement. Then another night get only a couple more hours of integration and the image becomes drastically better. I am still trying to figure out those "critical mass of information" mechanics.

  • @TheDostergaard
    @TheDostergaard 2 days ago +2

    Another great practical and informative video! Obviously you have to periodically process these images to make an evaluation of whether or not it needs more integration. Do you develop those interim images as if they might be "finished" or just enough to make a determination? You often are able to estimate how much more integration will be needed which I guess comes with experience. Do you develop these images after every session or only when you think you've collected enough additional data?

    • @SKYST0RY
      @SKYST0RY  2 days ago +1

      That's a good question. I will usually develop every 6 or more hours' worth of data. In some cases, like with the HorseHead Project, I will develop every night's worth of data to carefully track the progress, even if it's just a couple hours. Sometimes I have found a mere couple hours of integration can make a massive difference. It's as if information reaches certain critical points where the addition of a small amount more makes a huge improvement. Experience has taught me that bright emission nebulae may be finished in as little as 8 hours, bright nebulae with dark details will take twice that, dim nebulae will usually require 16 or more hours, and dark nebulae should get 20-30 hours. (This is shooting from a B1.5 area in LRGB. Persons shooting NB in light-polluted areas may need to increase these times by 2x to 5x.)
      If the development reveals faded structure or color, a solid clue that more integration time could benefit the image, I won't take the time to derive a fully finished image unless I have a reason to (like making a video).
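
      The guideline in this reply can be captured as a small lookup. A sketch only: the category names, the 25-hour midpoint for dark nebulae, and the pollution multiplier parameter are my own shorthand, not the author's terms.

      ```python
      # Rough integration-time guidelines from the reply above, for
      # LRGB from a very dark (B1.5) site. Category names are my own.
      BASE_HOURS = {
          "bright_emission": 8,
          "bright_with_dark_detail": 16,   # "twice that"
          "dim_nebula": 16,                # "16 or more hours"
          "dark_nebula": 25,               # midpoint of 20-30 hours
      }

      def suggested_hours(category: str, pollution_factor: float = 1.0) -> float:
          """pollution_factor: 1.0 for dark-site LRGB; the reply
          suggests 2x-5x for narrowband under light-polluted skies."""
          return BASE_HOURS[category] * pollution_factor

      print(suggested_hours("bright_emission"))   # 8.0
      print(suggested_hours("dark_nebula", 2.0))  # 50.0
      ```

      Treat these as starting points; the reply's own criterion is still the visual check for faded structure or color.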

  • @robb7342
    @robb7342 2 days ago +1

    Excellent video that affirmed my mindset with regard to detail and image grain when zooming in. I never really thought about colour saturation, so I'll have to keep that in mind. It raises a question about shooting LRGB and the ratio of L vs RGB: many recommend 3:1 for more detail, but I can't help wondering how that impacts overall colour saturation. What are your experience and thoughts?

    • @SKYST0RY
      @SKYST0RY  2 days ago +1

      I find the following ratio works very reliably for everything: 60L:20R:20G:20B. Sometimes, if I'm in a rush to capture information, I'll shoot more L, or even just L, since it captures information much faster than any individual color filter, and twice as fast as an OSC.
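
      As a quick sketch of how that 60:20:20:20 ratio divides a session (the helper name is my own):

      ```python
      def lrgb_plan(total_hours: float, weights=(60, 20, 20, 20)) -> dict:
          """Split a session per the 60L:20R:20G:20B ratio mentioned
          above. Returns hours per filter."""
          names = ("L", "R", "G", "B")
          total_w = sum(weights)
          return {n: total_hours * w / total_w for n, w in zip(names, weights)}

      print(lrgb_plan(12))  # {'L': 6.0, 'R': 2.0, 'G': 2.0, 'B': 2.0}
      ```

      In other words, luminance gets half the total time and each color channel gets a sixth; the weights parameter makes it easy to try the 3:1 L-heavy split the question mentions.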

  • @dmitribovski1292
    @dmitribovski1292 2 days ago +2

    The truth is you never have enough.

    • @SKYST0RY
      @SKYST0RY  2 days ago

      There's a lot of truth to that.

  • @pompeymonkey3271
    @pompeymonkey3271 2 days ago +1

    I'm going to pre-empt my comments by falling back to "But you can never have too much integration time!". I'll comment back after watching! :)
    Post watch edit: Yup :)
    Enjoy your mushrooms in front of the fire! :)

    • @SKYST0RY
      @SKYST0RY  2 days ago

      You are right. You will always add a little more information with more integration time. Or at least for a long time. At some point, you have to make a subjective decision on when enough is enough.

  • @gregerianne3880
    @gregerianne3880 2 days ago

    Nice set of criteria for determining if more integration time is needed. Thanks! It's basically folded into what you showed in the video, but do you ever use the noise level in a lightly processed image to help you determine if more time is needed?

    • @SKYST0RY
      @SKYST0RY  2 days ago +2

      I never use SNR to determine if an image has received enough integration. I thought about explaining why in this video, but going into depth would have made it much longer; it's a fairly complex and tangential topic. There are a couple of reasons I can state succinctly. 1. SNR tools don't know what is signal and what is noise; they are simply algorithms that generally define noise as information that doesn't fit into patterns, so they are inherently subject to inaccuracy. 2. SNR is a ratio that attempts to quantify signal relative to noise. It is not a measure of how complete an image is (whether all possible information has been collected), so SNR cannot tell you if an image is "done". Imagine an image with no information yet equals 0, and an image with complete information equals 1. Every time you assess your image, you are looking at a snapshot of where the information now stands. SNR might tell you roughly what your signal-to-noise ratio is at each snapshot, but it does not know what "1" (a complete image, a condition of all possible information acquired) is. Dylan O'Donnell did a good video on this about a month ago. ruclips.net/video/T7MSwo-V2vE/видео.htmlsi=Lrnime288j4e6FI0

    • @gregerianne3880
      @gregerianne3880 2 days ago +2

      @@SKYST0RY Thanks for the lengthy explanation! Makes sense. I appreciate it.

  • @scottrk4930
    @scottrk4930 2 days ago

    As always, excellent info and video. Thanks! Question: in case I've missed it, have you covered or disclosed the monitor and calibration (I assume) that allow you to "peep" to this depth so reliably? I'm always worried that my viewing setup may be the weak link in determining the direction of the development process. Love to know what your setup is. Cheers from Ontario.

    • @SKYST0RY
      @SKYST0RY  2 days ago +1

      I don't think I've gotten too deep into that yet. You should have at least a 4K monitor capable of Adobe RGB or Display P3 color. It should be calibrated, but you can do that yourself. More resolution is better, if possible, but expensive and not strictly necessary. What often gets overlooked is making sure the monitor can display true black and has a wide luminosity range, which helps avoid black crushing and banding when viewing. I am using a BenQ 3270U that I calibrated. I chose this monitor because its type of LCD panel is capable of showing a very deep, rich black, near true black.

    • @scottrk4930
      @scottrk4930 2 days ago

      @@SKYST0RY Thanks for your reply. Would it be worth a future video to cover monitors and do a calibration?

    • @SKYST0RY
      @SKYST0RY  1 day ago

      @@scottrk4930 Yep! It has been on my to-do list for a while.

  • @ryanmichaelhaley
    @ryanmichaelhaley 2 days ago

    I can only shoot about 3 hours a night because I shoot in my driveway and, well, I need sleep. I try to get at least 10 hours on a target, but I am always eager to do more targets, so I end up settling on 10 hours. I guess what I can do is revisit these targets the following year(s) and add more integration time. Is there any real benefit in adding two more nights of imaging to get 15 hours? I am always eager to move on to the next target, and I definitely don't want to do one target per season (well, two, since I run two telescopes). If I did two targets per season to push for 20 hours, I would only get 8 targets a year, which is a bit disappointing I think.

    • @SKYST0RY
      @SKYST0RY  2 days ago

      Having to be up to watch your equipment is tough but understandable. There is always a benefit to adding more time, however. You never reach a condition of all possible information. It's more you keep adding integration till you're happy with the image. I have found sometimes even just adding a couple hours to an image that already has as much as 20 hours can make a big difference. It's as if the visual information reaches a critical mass that results in substantial improvements.

  • @ionfreefly
    @ionfreefly 1 day ago

    What Bortle class are your skies?