I think most of your data will be from the L filter, so I would use that and be a bit conservative in the estimate. If you do want to include the RGB, I think you'd want to use a weighted mean depending on the % of frames that will be L vs RGB.
Siril's statistics show a 4-digit mean number, but in PixInsight it shows something like 00.000. Does anybody know how to change this to a 4-digit number like Siril's, to use with this app?
I tried the link to calculate the SNR, but the "Camera Gain (e-/ADU)" does not make any sense to me. e-/ADU stands for "electrons per Analog-to-Digital Unit," which implies more sensitivity for lower values. The lowest read noise for my sensor is at gain 450, where the e-/ADU value is near 0 according to my camera's specifications. But when I use 0 in your app, it says I need infinite exposures. This does not make sense to me, because it does not match my observations.
1) Camera gain is not read noise. e-/ADU basically means: each time a photon hits your sensor, how many electrons will it send to the processor? If it is 0, then you won't record any photons (as measured electrons). Higher numbers mean more sensitivity. 2) People generally use unity gain, or close to it. It just means that if a photon hits your sensor, it will record it as one electron. Higher gain values generally mean more shot noise*. [Struck out: ", which is a separate metric you put in as 'Camera's Read Noise (Electrons)'."]
@@deepskydetail You misunderstood me. I know that camera gain is not read noise, but I have a camera with the IMX585 sensor, and according to the specs, when a high gain of 450 is selected, then the e-/ADU is near 0. According to your explanation and app, then it will NOT record any photons; however, this setting is MAX gain, therefore it records the maximum number of photons possible. When I select a gain of 0, then the e-/ADU is about 12. According to your explanation, this is the most sensitive setting; however, this is the LEAST sensitive setting. Your explanation only makes sense if you talk about ADU/e- and not e-/ADU. Higher gain gives lower read noise for the IMX585 (not higher). This sensor does have a read noise of 12 when gain 0 is used, read noise of 4 for unity gain, and read noise of 1 for gain 450. Therefore, you have the lowest read noise when the highest gain is selected. Therefore, I use a maximum gain of 450 for very faint objects, to have the most sensitivity (e-/ADU near 0) with the least read noise. See specs: astronomy-imaging-camera.com/product/asi585mc/
Oh, I see what you're saying now. I explained it poorly, and I have updated my previous answer a bit. A better explanation is that e-/ADU is how many electrons represent 1 ADU. If it is 0.5, then each intensity step (Analog-to-Digital Unit) represents 0.5 electrons. When gain increases, fewer recorded electrons make the image brighter. When e-/ADU is below one, you aren't actually improving SNR, because the signal is being artificially amplified. In the extreme case when e-/ADU is zero, your camera will be saturated with a zero-second exposure (i.e., you'll get a completely white image). Example: if e-/ADU is 0.5, then if two photons are captured and converted perfectly to electrons, the ADU would increase by 4, but the true signal (2) is unchanged. With that said, you shouldn't use the read noise to guide your gain settings. A good rule of thumb is to use unity gain. Why? Read noise isn't as important as shot noise and dynamic range. Let's say your e-/ADU is still 0.5 and, like before, you capture two photons. Higher gain is not magic: the signal is still 2. But the shot noise (not the read noise) has increased. Instead of being sqrt(2) as it would be at unity gain (1 e-/ADU), you have a shot noise of sqrt(4)=2.
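To make the unit bookkeeping above concrete, here is a tiny sketch of the ADU-to-electron conversion (illustrative helper functions written for this reply, not anything from the app itself):

```python
def electrons_from_adu(adu, gain_e_per_adu):
    """Electrons represented by an ADU reading, given the gain in e-/ADU."""
    return adu * gain_e_per_adu

def adu_from_electrons(electrons, gain_e_per_adu):
    """ADU reading produced by a given number of electrons."""
    return electrons / gain_e_per_adu

# With e-/ADU = 0.5, each ADU step represents half an electron,
# so 2 captured electrons read out as 4 ADU (the image looks brighter)...
print(adu_from_electrons(2.0, 0.5))   # 4.0
# ...but converting back recovers the same true signal of 2 electrons.
print(electrons_from_adu(4.0, 0.5))   # 2.0
```

The point of the example is that amplification changes the readout, not the number of electrons the sensor actually captured.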
@@deepskydetail The reason everyone uses "unity gain" is because everyone uses unity gain. This, in my opinion, is more copying behavior than an actual good reason. Because I work with a Dobsonian on an equatorial platform, I want to minimize the length of sub-frames to minimize tracking errors. That's why I've done some research into "deep-sky lucky imaging," where they work with sub-frame lengths as short as 4 seconds. An explanation can be found here (in French): ruclips.net/video/0lKCB0q0jV0/видео.html The section about gain settings starts at 19:30. So, the gain is set slightly below the maximum value there to minimize read noise (in other words, not unity gain). This is exactly what I do as well; I set the gain to almost the maximum value and take many short exposures. I came to your page precisely because I want to know how short I can make my exposures without read noise becoming the dominant factor, but I could not extract this information with your application. Currently, I take a few thousand sub-exposures of 15 seconds each per object, but I want to reduce the sub-exposure to below 10 seconds. Your previous explanation about the conversion process is correct; higher gain settings above unity gain do not make the camera more sensitive, but they do reduce the read noise.
I'm trying to get an estimate from my subs, but when I load them into Siril, I get separate numbers for red, green, and blue. Which color am I supposed to use? They're unbalanced, so I get different estimates for each: 27 hr, 5 hr, and 8.5 hr of exposures using the red, green, and blue mean values, respectively. Any ideas on the best way to handle this? Thanks!
Good question. Are you using RGB filters with a monochrome camera or a one shot color camera? If the former, I would use the RGB compositing tool, do a quick color balance and then use the RGB tab to find the DSO and sky background values. Then put that into the app and multiply the sub length by three. If you're using a one shot color, then use the RGB tab. Hopefully, I understood the question.
@@deepskydetail Thanks for the reply. I’m using a one shot color camera. I save them as fits files using SharpCap, then load them into Siril. By default it loads it as black and white but when I get statistics it breaks it down by R, G, B separately. If I debayer upon open it shows tabs for R, G, B, and RGB, but even with the RGB tab, when I get statistics on a selection, it splits it into R, G, B channels.
@@deepskydetail I do notice there's a box in the top right corner of the statistics window that says "Per CFA channel," which is checked, but I'm unable to uncheck it. Maybe that would convert them to a single number if I could uncheck it. Not sure why it won't let me uncheck the box…
Okay I figured it out. I can load an image and NOT debayer it, then get stats on a selection and it will let me uncheck “Per CFA channel” and report a single value as in your video.
According to this, then, you will significantly shorten the total integration time by using short-duration light frames, correct?? I calculate that if I use 36 seconds per light frame I can reach SNR > 100 after 7.9 hours, but with a 180-second duration it would take 40 hours???? Really?
Thanks for the comment. What numbers are you putting into the app, and how are you getting such different results? The app tries to estimate noise and target signal on an electron-per-second basis, and I haven't seen that sort of discrepancy in what I have tested. For example, in my light-polluted area, to get an SNR of 30 on M78 with 300-second exposures, I would need about 15.5 hours of data (300s x 186 exposures ≈ 15.5 hours). When I use the slider to estimate the SNR with 60-second exposures, it tells me I still need 15.55 hours of exposure time (60s x 933 exposures). If I move the slider up to 5x, I still get 15.83 hours of exposure time (1500s x 38 exposures). If you could share the parameters you're using to get both results, I'd appreciate it. Thanks!
You will need 30753 exposures to get an SNR > 100.1 at 180 seconds (1537.65 hours) [0.570814666845778]. What am I doing wrong? Lights: 513 and 508; Bias: 501.8; Dark: 502.5. ZWO ASI 6200MM Pro, Gain: 100, Exp: 180 sec 😢😢😢😢 PLS HELP 🤦‍♂
1/2 OK, let me see if I understand this: DSO + SkyGlow Signal: 513 SkyGlow Background: 508 Dark Signal: 502.5 Bias Signal: 501.8 Camera Gain (e-/ADU): 0.25 (at 100 regular gain, this is what I have for the ZWO 6200mm) Camera's Read Noise (Electrons): 1.5 (again, at 100 regular gain) Desired SNR: 100 I get that you will need 1600 hours. That might be correct for 100 SNR. Notice that your DSO + Skyglow signal is only 5 more than your Skyglow background signal. That means the part of the image you are using to calculate it is really faint. Or you have a lot of light pollution.
Do you mind sharing the telescope and FOV you used? I think Ha and OIII would be the go-to filters for that target. The Spaghetti Nebula is pretty faint to begin with. If you are using a faint part of the nebula for your input, that might be the reason, especially if you aren't using filters at all. Maybe use a brighter part? Also, you're in Bortle 2, so you might want to use longer exposures. Noise will hardly go up, but your signal should improve (try 300s or even 600s if your mount can handle it). It should definitely be possible in Bortle 2, though! Nico Carver could do it in pretty light-polluted skies (www.nebulaphotos.com/sharpless/sh2-240/ ). Try a light frame using an Ha filter and see what you get. Let me know what the results are :)
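For readers wondering where numbers like 1600 hours come from, here is a rough sketch of the Stark-style arithmetic the app is built around, using the commenter's values. This is my simplified reading of the model, not the app's actual source code, and the exact formula may differ:

```python
import math

def snr_per_sub(dso_plus_sky_adu, sky_adu, dark_adu, bias_adu,
                gain_e_per_adu, read_noise_e):
    """Simplified per-sub SNR: convert the ADU samples to electrons and
    add the noise sources in quadrature (a sketch, not the app's code)."""
    target_e = (dso_plus_sky_adu - sky_adu) * gain_e_per_adu  # object signal
    sky_e = (sky_adu - dark_adu) * gain_e_per_adu             # skyglow above dark
    dark_e = (dark_adu - bias_adu) * gain_e_per_adu           # dark current above bias
    noise_e = math.sqrt(target_e + sky_e + dark_e + read_noise_e ** 2)
    return target_e / noise_e

def subs_needed(snr_single, target_snr):
    """Stacking N subs improves SNR by sqrt(N), so N = (target/single)^2."""
    return math.ceil((target_snr / snr_single) ** 2)

# Only 5 ADU of target above 508 ADU of skyglow -> tiny per-sub SNR,
# which forces tens of thousands of subs to reach SNR 100.
snr1 = snr_per_sub(513, 508, 502.5, 501.8, 0.25, 1.5)
n = subs_needed(snr1, 100)
```

With these inputs the per-sub SNR lands well below 1 and the required sub count in the tens of thousands, the same ballpark as the app's output quoted above.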
Pro advice here: do the shortest possible subs that are sky-limited and take as many as possible, discarding any that are not perfect (guiding errors, worse seeing and such). It will improve your photos dramatically. You only want to go to longer subs for extremely faint objects where there is almost no signal at all in a single sub. What matters is total integration time, and when your subs are sky-limited you won't gain much by doing longer subs, but you can lose a lot: it can be pretty difficult to get perfect guiding for longer periods, and your subs will be blurrier if the seeing is not super stable. With shorter subs you can be very selective about what you use in the end; discarding a 30s sub is much easier than a 10-minute one.
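The sky-limited argument above can be checked numerically: once per-sub skyglow noise dwarfs the read noise, stacked SNR depends almost entirely on total integration time, not sub length. All the rates below are made-up illustrative values:

```python
import math

def stack_snr(total_s, sub_s, signal_eps, sky_eps, read_noise_e):
    """SNR of a stack of identical subs: per-sub SNR times sqrt(N).
    signal_eps and sky_eps are assumed electron-per-second rates."""
    n_subs = total_s / sub_s
    sig_e = signal_eps * sub_s
    noise_e = math.sqrt(sig_e + sky_eps * sub_s + read_noise_e ** 2)
    return (sig_e / noise_e) * math.sqrt(n_subs)

# Ten hours total under a bright sky (5 e-/s of skyglow swamps 1.5 e- read noise):
snr_30s = stack_snr(36000, 30, 0.2, 5.0, 1.5)    # 1200 x 30 s subs
snr_300s = stack_snr(36000, 300, 0.2, 5.0, 1.5)  # 120 x 300 s subs
# The two stacks land within about 1% of each other.
```

Drop the skyglow rate toward zero (a dark site, narrowband) and the read-noise term starts to matter, which is exactly when longer subs pay off.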
Here is the URL to the SNR planning app: deepskydetail.shinyapps.io/Calculate_SNR/
Interesting! Will look forward to using this. Thanks
Fantastic video! I can't believe how quickly you whipped up that app!
Thanks! This app wasn't too difficult because the math was already in the original spreadsheet. It was just a matter of turning it into simple code. Plus, the R packages that I use to make the application make coding everything pretty simple.
I subscribed to your channel faster than one can say "WOW man, this is superb!"
This should prove very useful to a newbie, who probably oversamples.
I have just recently "discovered" your channel. I'm very interested in this and all that goes with it. Looking forward to the next one. Thanks m8. 😃 🇦🇺 🖖
Very interesting. Thank you for your work. I suspect, seeing most everyone uses some type of filter, the hours of integration/exposures will go up exponentially.
I hope this tool gets more developed over time. Such a great thing to have especially for beginners or even professionals. Also with the camera I have I don’t use bias frames but dark flats. Hopefully there will be an option for those in the future. Great work!!!
You should be able to just take some bias frames to get the calculations. I only use dark flats too, but taking a few bias frames is easy and the numbers should be valid for a while.
@@deepskydetail Good to know! thanks for getting back😃
James, after watching your video I'm definitely looking to use your application. Thanks for reviving Craig's work. Good job!
Superb! I will try this. Thanks for your efforts.
Great video and awesome web app. It would be great to make this into an iOS app.
Really cool app! Thank you for creating it. I will try it out myself as well!
Thank you!
Hats off to you, sir! I've made a small donation to help the cause along, and to encourage others who might want to contribute to the astrophotography/astronomy community. Many thanks! PS - I also subscribed to your channel.
Thank you very much! :)
Great video. Thanks so much for your work and contributions to the community.
Thank you! Glad to do it!
Thanks a lot! Amazing video and amazing app!!! Really cool stuff!
Thanks for the informative video. How would you gather the mean ADU value in PixInsight for a selected area? I know how to collect the mean value for the entire frame, but I need help finding how to do it for a designated area. Dynamic Crop, maybe?
Unfortunately, I'm not sure how to get the statistics in Pixinsight. Sorry about that!
Bravo, you rock!
Thank you bro for your hard work
You're welcome :)
Great work! Thanks..
Thank you too!
awesome work!
Thanks :)
Well done! One question: how is quantum efficiency of a camera taken into account? The shot noise originates from Poisson distribution of photons. Are you assuming that the quantum efficiency is something like 85% and then taking that into account?
I don't think quantum efficiency is needed to get the signal. Why? Because the total signal is being calculated from the light frame itself. If QE goes up for a camera, then you'll see a brighter image in your light frame (i.e., the ADU will be higher). In other words, the app will adjust the signal based on that based on the two samples you took from the light frame.
Nice tool and a well-designed video. Well done. Maybe you want to check your exposures: the graph for your camera showed 1.9e read noise, not 0.9e.
Thanks, but don't I have 2 for my read noise in the video? 0.9 is the gain value in electrons per ADU.
@@deepskydetail aaaaah, got it. 😅
Thank you!
You're welcome! And thanks for watching!
Thanks, this sounds like a great tool. Do you know how I would relate camera gain (I assume you are using an Astro-camera) to ISO on a one-shot-color camera?
I'm going to have a video on it. It will probably come out today or tomorrow.
Excellent work. One thing I can't quite get my head around - no matter what numbers I plug in based on my subs and biases, the app always suggests something in the order of hundreds or even thousands of hours needed, even to get a SNR or 75 or so. I haven't seen anything with my frames to suggest they are overly noisy - what might I be missing? i am also using Siril stats to get the values. I am happy to take 10 or even 20 hours worth over a few nights, but not hundreds :). Thanks again, this will be very valuable once I get it figured out!
Thanks, do you mind sharing the numbers you're putting into the app?
@@deepskydetail SO sorry I missed your reply here - I actually did get it figured out (though I don't remember what the solution was at the moment). Much appreciated.
Fantastic work! What do you mean by bias signal. I do not take bias frames with a cmos camera. There are darks, lights, flats and flat darks. Please say more about getting the value needed to enter into your bias input.
A bias frame is the same ISO as your flats/darks, with the shortest possible exposure. It contains basically only read noise. It's an alternative to a flat dark for calibrating noise out of your flats.
Yes, I would take a very short exposure with the same ISO as your dark and light to get the bias signal.
@@deepskydetail Thanks. I have been testing your very nicely built App, some parameters like read noise have little to no impact on the curves.? Neither does the bias signal? I know you based this on a spreadsheet set of equations, what are your thought on this?
@@pcboreland1 I've gotten similar questions on this. Basically, read noise should affect the curve, but it might not have a huge effect (especially if you have a lot of light pollution).
The bias signal should have an effect, but the dark signal shouldn't. Basically, the skyglow noise estimate takes into account the dark signal (skyglow - dark) , and the total dark signal also takes into account the dark signal (i.e., dark - bias). So... in the end those two things cancel each other out in the math, so the dark signal is kind of redundant. But, they might eventually be relevant if I ever include more information in the app.
The biggest impact is probably going to come from the target signal and the skyglow.
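The cancellation described above is easy to verify with a couple of lines: if the skyglow term uses (skyglow − dark) and the dark term uses (dark − bias), the dark signal drops out of the sum. The numbers here are arbitrary:

```python
# In the combined total, the dark signal cancels algebraically:
#   (skyglow - dark) + (dark - bias) = skyglow - bias
skyglow, bias = 1200.0, 500.0

for dark in (510.0, 700.0, 1000.0):   # any dark value gives the same total
    total = (skyglow - dark) + (dark - bias)
    assert total == skyglow - bias    # always 700.0, independent of dark
```

That is why sliding the dark-signal input around leaves the curve untouched, while the bias input still matters.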
I was wondering if any of this applies to OSC DSLR cameras
This is super cool! I use a Nikon D5300 to image, where can I find those camera stats for it, since I don’t use an astro cam that gives them to you on its website?
You should check out this video I made for DSLRs: ruclips.net/video/oBpAkHim7dY/видео.html
It takes a bit of calculation to get the RN and gain for DSLRs but it is doable.
@@deepskydetail Thank you!
Great work, awesome video. Thank you very much!
Thanks!
Thank you. Very interesting tool; I am using it. We can see from the graph that it can go beyond 100.
So it means that at some point adding frames is no longer increasing the SNR? What to think about people adding 100H?
Thanks! I'm not sure exactly what the question is. What do you mean by "adding 100H"? Thanks :)
No, gathering more exposure time will always increase the SNR
@@deepskydetail Sorry for the typo, I meant 100 subs. Isn't an SNR ratio of 100 a limit? Are we increasing the quality by going over SNR 100?
@@galactus012345 That's a good question, and I should have been clearer in the video. You can theoretically have infinite SNR if you have infinite integration time. It's a ratio, not a percentage, so it doesn't stop at 100.
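Since SNR is a ratio that grows with the square root of the number of subs, there is no ceiling at 100. A two-line sketch with an illustrative per-sub SNR of 2:

```python
import math

def stacked_snr(snr_single, n_subs):
    """SNR of an average stack of n identical subs."""
    return snr_single * math.sqrt(n_subs)

print(stacked_snr(2.0, 2500))    # 100.0 -- SNR 100 after 2500 subs
print(stacked_snr(2.0, 10000))   # 200.0 -- and it keeps rising past 100
```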
Thank you for the videos and apps! It is most appreciated. What makes me very sad is to see the exposure time I need to get an SNR of 50 under Bortle 5 skies (mag 20): 144 hours when I use 60s frames. But the exposure time depends a lot on how I frame when I get the statistics. I used a small galaxy, NGC 3184, and when I only framed a part of the galaxy I got an exposure time of 27 hours; when I framed in more than the galaxy, I got an exposure time of 144 hours! So what do you think regarding how to obtain statistics from light frames? I used Siril, by the way.
Thanks! With regard to your question, you really need to get a good sample of the sky background for things to give you an accurate measurement. Take the statistics of the sky background as close to the galaxy as you can, while making sure you don't include the galaxy in it. Also, the part of the galaxy you take a sample of matters. The resulting SNR in the app is only for the sample that you took. So, if you took a sample from the outer arms of the galaxy, you'll need more imaging time to get the same SNR compared to the brighter core. Hope that makes sense.
Also make sure you're using an "average" sub frame. One that isn't your best, but not your worst either. These things are important! I hope that answered the question.
Thanks for your reply! I have tried to do as you described, and I find that the results make more sense. 23.45 hours under Bortle 5 skies for SNR 50 makes more sense to me. With the 10 hours of integration time I've got, I have to use binning to achieve a fairly acceptable result. Thank you again! Question: is it possible to measure the SNR directly in a stacked FITS file? Sometimes I wish I could do that, to be able to compare stacks with different exposure times (60s vs 180s). Total time would be the same for both.
@@HaakonRasmussen-r8o So, I think different stacking software will attempt to measure SNR of a stacked image. But, I'm not sure how well they work, and different software produce different results. I've measured the SNR in a few different videos myself (see my Bortle analysis, Ha filter analysis or SNR and stacking analysis videos). I'm thinking of making an app that can do that with help from Siril by using registered sub frames. But, as far as I know, you need to measure the SNR as you stack.
Thank you, @@deepskydetail!
Very cool. Is there a way to skip bias frames? I have a DWARFLAB dwarf2 smart telescope, and normally most users of this scope do not use bias or flats.
I think you'd need bias for accurate results. But they're really easy to take. The dwarf2 might have a function to let you do that (but I'm not sure). If not, I'd email the developers to get that functionality :)
Hello, @deepskydetail. I've been studying this stuff recently from the articles of Craig Stark, and have been using the spreadsheet too (the same one that appears at 1:40 of your video). I have a question; hope you can help me see where I'm wrong. If I enter into the spreadsheet the very same data you show in your Excel file (the data that is loaded automatically when opening the web app), the target SNR for a single image is 0.794, and then, to get to an SNR of 5, I'd need 40 images (this last calculation is also made by the spreadsheet itself, in the Stacking sheet). The problem is that this is not the value that the web app gives (it says I only need 33 subs). Where is the difference coming from?
Thanks in advance. By the way, the web app is really amazing and helpful.
Thanks for looking into this!! Just fixed it based on your comment! I double checked, and it seems like there was a mistake in the code when calculating the signal in electrons. It was throwing off the SNR of a single sub exposure a bit, which of course has compounding effects!
@@deepskydetail Hey! Thanks for considering my comment. And as I said, it's a really helpful tool. Thanks for your work! (BTW, now the numbers match)
Hi! Changing the camera read noise value or changing the dark signal value appears to have no impact on the SNR curve. Is this expected behavior? In fact I can set both values to zero!
Hi, I just checked, and
(1) changing the read noise value does change the SNR curve for me. Try increasing the read noise to something like 500 to see if it changes. It did for me.
(2) After double-checking, it is normal that changing the dark signal value does not change the curve. Basically, the skyglow noise estimate subtracts the dark signal (skyglow - dark), and the dark-current estimate is itself computed from the dark and bias frames (i.e., dark - bias). So, in the end, those two things cancel each other out in the math, and the dark signal is kind of redundant. But it might eventually be relevant if I ever include more information in the app.
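A tiny sketch of that cancellation (the ADU values and gain below are hypothetical illustrations, not the app's internals):

```python
# Hypothetical ADU means and gain, just to show the algebra.
gain = 0.25          # e-/ADU (assumed)
skyglow = 1909.0     # mean ADU of a background patch
dark = 12.0          # mean ADU of a dark frame
bias = 10.9          # mean ADU of a bias frame

sky_signal_e = (skyglow - dark) * gain   # skyglow estimate subtracts dark
dark_signal_e = (dark - bias) * gain     # dark-current estimate subtracts bias

# The sum depends only on skyglow and bias; the dark value cancels out.
total_e = sky_signal_e + dark_signal_e
assert abs(total_e - (skyglow - bias) * gain) < 1e-9
```

Whatever value you type for the dark signal adds to one term and subtracts from the other, which is why the curve doesn't move.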
I'm totally confused. I don't know what ratio is "good," so how am I supposed to pick one?
You can check out this video I made on it :)
ruclips.net/video/eRKk3lNyXO8/видео.html
I must be doing something wrong. If I'm using PixInsight to grab the ADU, I make a preview of the faint and dark areas and then select the Statistics tool, like you've got there in Siril.
Problem is, the numbers seem way off. I have the 16-bit option in the drop-down selected, since that's what my 2600MM is. I tried normalized and 14-bit, since those numbers were closer to yours. Neither worked.
The outputs I got were as follows. Input 1: 3078, 2: 1909, 3: 12, 4: 10.9. The camera stuff: 1: 0.21, 2: 1.5. Desired SNR: 90. Exposure duration: 90.
Halp!
It looks to me like the ADU values for the light frame might be off (too big of a difference between them, maybe?). What target are you shooting, and what is the light pollution in your area?
@@deepskydetail m101 and bottle 4
Bortle*
I'm having a bit of trouble figuring out which numbers go where. Is this correct?
DSO + Skyglow Signal: 3078
Skyglow background: 1909
Dark Signal: 12
Bias Signal: 10.9
Camera Gain: 21
Camera's Read Noise: 1.5
If I put those numbers in, I get 9.7 hours to get an SNR of 90. But I think the dark and bias signal values may be off. Are those numbers in ADU? I would expect those to be a bit larger. And the camera gain should be lower (probably around 1.0 or so if you are at unity gain).
Also, the difference between the DSO + Skyglow and Skyglow numbers seems a bit too large. I'd double-check those and make sure you're getting an output in ADU and not some other unit.
How do I deal with a Canon DSLR? The dark frame is scaled in the camera to the same mean value as the bias...
Clarifying question: does this mean that when you take a bias frame, it has the same ADU as a dark frame?
How do I calculate the needed exposures per filter for mono cameras? If I use LRGB filters, do I simply divide the number by 4?
I think most of your data will be from the L filter, so I would use that and be a bit conservative in the estimate. If you do want to include the RGB, I think you'd want to use a weighted mean depending on the percentage of frames that will be L vs. RGB.
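As a hypothetical sketch of that weighted-mean idea (the per-filter hours and frame fractions below are made up for illustration, not output from the app):

```python
# Made-up per-filter "hours needed" estimates and planned frame fractions.
hours_needed = {"L": 6.0, "R": 10.0, "G": 9.0, "B": 11.0}
frame_fraction = {"L": 0.6, "R": 0.15, "G": 0.15, "B": 0.10}  # must sum to 1

# Weight each filter's estimate by the share of subs it will get.
weighted_hours = sum(hours_needed[f] * frame_fraction[f] for f in hours_needed)
print(round(weighted_hours, 2))  # a single blended target for the session
```

With 60% of subs on L, the blended estimate sits much closer to the L figure than to the slower RGB ones, which is why a straight divide-by-4 can mislead.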
Siril's statistics show a four-digit mean, but in PixInsight it shows something like 00.000. Does anybody know how to change this to a four-digit number like Siril's, so I can enter it in this app?
On the stats window, there is a checkbox that says "Normalized Real [0, 1]". Is that checked or unchecked?
I tried the link to calculate the SNR, but the "Camera Gain (e-/ADU)" does not make any sense to me. e-/ADU stands for "electrons per Analog-to-Digital Unit," which implies more sensitivity for lower values. The lowest read noise for my sensor is at gain 450, where the e-/ADU value is near 0 according to the specifications of my camera. But when I use 0 in your app, then I need infinite exposures according to your app. This does not make any sense to me because that does not match with my observations.
1) Camera gain is not read noise. e-/ADU basically means that each time a photon hits your sensor, how many electrons will it send to the processor. If it is 0, then you won't record any photons (as measured electrons). Higher numbers mean more sensitivity.
2) People generally use unity gain, or close to it. It just means that if a photon hits your sensor, it will record it as one electron. Higher gain values generally mean more shot noise.
@@deepskydetail You misunderstood me. I know that camera gain is not read noise, but I have a camera with the IMX585 sensor, and according to the specs, when a high gain of 450 is selected, then the e-/ADU is near 0. According to your explanation and app, then it will NOT record any photons; however, this setting is MAX gain, therefore it records the maximum number of photons possible. When I select a gain of 0, then the e-/ADU is about 12. According to your explanation, this is the most sensitive setting; however, this is the LEAST sensitive setting. Your explanation only makes sense if you talk about ADU/e- and not e-/ADU.
Higher gain gives lower read noise for the IMX585 (not higher). This sensor does have a read noise of 12 when gain 0 is used, read noise of 4 for unity gain, and read noise of 1 for gain 450. Therefore, you have the lowest read noise when the highest gain is selected.
Therefore, I use a maximum gain of 450 for very faint objects, to have the most sensitivity (e-/ADU near 0) with the least read noise.
See specs: astronomy-imaging-camera.com/product/asi585mc/
@@deepskydetail I try to reply, but RUclips's bot removes my reply instantly. I will try to reply in several posts.
Oh, I see what you're saying now. I explained it poorly, and I have updated my previous answer a bit. A better explanation is that e-/ADU is how many electrons represent 1 ADU. If it is 0.5, then each intensity step (Analog-to-Digital Unit) represents 0.5 electrons. When gain increases, fewer captured electrons are needed to make the image brighter. When e-/ADU is below one, you aren't actually improving SNR, because the signal is being artificially amplified. In the extreme case when e-/ADU is zero, your camera will be saturated with a zero-second exposure (i.e., you'll get a completely white image).
Example: if e-/ADU is 0.5 and two photons are captured and converted perfectly to electrons, the ADU increases by 4. But the true signal (2 electrons) is unchanged.
With that said, you shouldn't use read noise alone to guide your gain settings. A good rule of thumb is to use unity gain. Why? Read noise isn't as important as shot noise and dynamic range. Let's say your e-/ADU is still 0.5 and, like before, you capture two photons. Higher gain is not magic: the signal is still 2 electrons. The shot noise in ADU is amplified right along with it: instead of sqrt(2) ADU at unity gain (1 e-/ADU), you now have 2*sqrt(2) ADU of shot noise, so the SNR is no better.
@@deepskydetail
The reason everyone uses "unity gain" is because everyone uses unity gain.
This, in my opinion, is more of a copy behavior than an actual good reason.
Because I work with a Dobsonian on an equatorial platform, I want to minimize the length of sub-frames to minimize tracking errors.
That's why I've done some research into "deep sky lucky imaging," where they work with sub-frame lengths as short as 4 seconds.
An explanation can be found here (in French): ruclips.net/video/0lKCB0q0jV0/видео.html
The section about gain settings starts at 19:30. So, the gain is set slightly below the maximum value there to minimize read noise (in other words, not unity gain).
This is exactly what I do as well; I set the gain to almost the maximum value and take many short exposures.
I came to your page precisely because I want to know how short I can make my exposures without read noise becoming the dominant factor.
But I could not extract this information with your application.
Currently, I take a few thousand sub-exposures of 15 seconds each per object, but I want to reduce the sub-exposure to below 10 seconds.
Your previous explanation about the conversion process is correct; higher gain settings above unity gain do not make the camera more sensitive, but they do reduce the read noise.
I'm trying to get an estimate from my subs, but when I load them into Siril, I get separate numbers for red, green, and blue. Which color am I supposed to use? They're unbalanced, so I get different estimates for each: 27 hr, 5 hr, and 8.5 hr of exposure using the red, green, and blue mean values, respectively. Any ideas on the best way to handle this? Thanks!
Good question. Are you using RGB filters with a monochrome camera or a one shot color camera? If the former, I would use the RGB compositing tool, do a quick color balance and then use the RGB tab to find the DSO and sky background values. Then put that into the app and multiply the sub length by three. If you're using a one shot color, then use the RGB tab. Hopefully, I understood the question.
@@deepskydetail Thanks for the reply. I’m using a one shot color camera. I save them as fits files using SharpCap, then load them into Siril. By default it loads it as black and white but when I get statistics it breaks it down by R, G, B separately. If I debayer upon open it shows tabs for R, G, B, and RGB, but even with the RGB tab, when I get statistics on a selection, it splits it into R, G, B channels.
@@deepskydetail I do notice there's a box in the top right corner of the statistics window that says "Per CFA channel," which is checked, but I'm unable to uncheck it. Maybe that would convert them to a single number if I could uncheck it. Not sure why it won't let me uncheck the box…
Okay I figured it out. I can load an image and NOT debayer it, then get stats on a selection and it will let me uncheck “Per CFA channel” and report a single value as in your video.
@@andrewoler1 Glad you figured it out! :)
According to this, you will significantly shorten the total integration time by using short-duration light frames, correct?? I calculate that if I use 36 seconds per light frame, I can reach SNR > 100 after 7.9 hours, but with a 180-second duration it would take 40 hours???? Really?
Thanks for the comment. What numbers are you putting into the app to get you such different results, and how are you getting at such different numbers? The app tries to estimate noise and target signal on an electron per second basis. I haven't seen that sort of discrepancy with what I have tested.
For example, in my light-polluted area, to get an SNR of 30 on M78 with 300-second exposure frames, I would need about 15.5 hours of data (300s x 186 exposures = 15.5 hours). When I use the **slider** to estimate the SNR with 60-second exposures, it tells me I still need 15.55 hours of exposure time (60s x 933 exposures). If I move the slider up to 5x, I still get 15.83 hours of exposure time (1500s x 38 exposures).
If you could comment on the parameters you're using to get both results, I'd appreciate it. Thanks!
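The slider behavior described above falls out of a simple stacked-SNR model. Here's a sketch with made-up signal and background rates (not values from the app) showing that total integration time barely moves with sub length once read noise is small relative to the sky:

```python
import math

# Hypothetical per-second rates and per-sub read noise.
S = 0.05          # target signal rate, e-/s
B = 2.0           # sky background rate, e-/s
RN = 1.5          # read noise, electrons per sub
target_snr = 30.0

hours_by_sub = {}
for t in (60, 300, 1500):                               # sub length, seconds
    # Subs needed so that stacked SNR = target_snr:
    # SNR_stack = S*t*sqrt(n) / sqrt(S*t + B*t + RN^2)
    n = target_snr**2 * (S*t + B*t + RN**2) / (S*t)**2
    hours_by_sub[t] = n * t / 3600
    print(t, math.ceil(n), round(hours_by_sub[t], 2))
```

With these numbers the totals differ by under 2% across a 25x range of sub lengths; only the RN^2 term (fixed per sub) penalizes very short subs.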
You will need 30753 exposures to get an SNR > 100.1 at 180 seconds (1537.65 hours)
0.570814666845778
what am I doing wrong ?
lights: 513 and 508
Bias: 501.8
Dark: 502.5
Zwo Asi 6200mm Pro
Gain: 100
Exp: 180 sec
😢😢😢😢 PLS HELP 🤦♂
1/2
OK, let me see if I understand this:
DSO + SkyGlow Signal: 513
SkyGlow Background: 508
Dark Signal: 502.5
Bias Signal: 501.8
Camera Gain (e-/ADU): 0.25 (at 100 regular gain, this is what I have for the ZWO 6200mm)
Camera's Read Noise (Electrons): 1.5 (again, at 100 regular gain)
Desired SNR: 100
I get that you will need about 1600 hours. That might be correct for an SNR of 100. Notice that your DSO + Skyglow signal is only 5 ADU more than your Skyglow background signal.
That means the part of the image you are using for the calculation is really faint, or you have a lot of light pollution.
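To see why these inputs demand so many subs, here is a rough back-of-the-envelope sketch using the numbers quoted above and a textbook CCD SNR formula (this is not necessarily the app's exact math, so the count will only be in the same ballpark as its output):

```python
import math

gain = 0.25                          # e-/ADU at gain 100 on the ASI6200MM
RN = 1.5                             # read noise, electrons
target_adu, sky_adu = 513.0, 508.0   # DSO + skyglow vs. skyglow means
dark_adu, bias_adu = 502.5, 501.8

signal = (target_adu - sky_adu) * gain    # DSO signal: 1.25 e- per sub
sky = (sky_adu - dark_adu) * gain         # skyglow: ~1.4 e- per sub
dark = (dark_adu - bias_adu) * gain       # dark current: ~0.2 e- per sub

# Single-sub SNR, then scale by sqrt(n) to reach the stacked target of 100.
snr_single = signal / math.sqrt(signal + sky + dark + RN**2)
n_subs = math.ceil((100.0 / snr_single) ** 2)
print(round(snr_single, 3), n_subs)
```

A 1.25 e- signal against 1.5 e- of read noise alone puts the single-sub SNR near 0.55, and squaring (100 / 0.55) is what drives the count into the tens of thousands.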
2/2
Which target are you trying to image, and what part of the target did you select? Also, what is your light pollution like?
@@deepskydetail Hi! That's right. The photo we used for the reading was taken in Bortle 2, high in the mountains!
@@deepskydetail The Spaghetti Nebula, Bortle 2.
I have a mono camera, what filter is the best to take a photo with?
Thank you !
Do you mind sharing the telescope and FOV you used? I think Ha and OIII would be the go to filters for that target.
The Spaghetti Nebula is pretty faint to begin with. If you are using a faint part of the nebula for your input, that might be the reason, especially if you aren't using filters at all. Maybe use a brighter part? Also, you're in Bortle 2, so you might want to use longer exposures. Noise will hardly go up, but your signal should improve (try 300s or even 600s if your mount can handle it).
It should definitely be possible in Bortle 2, though! Nico Carver managed it in pretty light-polluted skies (www.nebulaphotos.com/sharpless/sh2-240/). Try a light frame using an Ha filter and see what you get. Let me know what the results are :)
who takes 15s subs ...
Pro advice here: take the shortest possible subs that are sky-limited, and take as many as possible, discarding any that are not perfect (guiding errors, worse seeing, and such). It will improve your photos dramatically. You only want to go to longer subs for extremely faint objects where there is almost no signal at all in a single sub. What matters is total integration time, and when your subs are sky-limited you won't gain much by going longer, but you can lose a lot: it can be pretty difficult to get perfect guiding for a long time, and your subs will be blurrier if the seeing is not super stable. With shorter subs you can be very selective about what you keep in the end; discarding a 30s sub is much easier than a 10-minute one.
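One concrete way to read "sky-limited": require the sky's shot-noise variance to swamp the read-noise variance by some factor. A sketch of that rule of thumb (the factor of 10 and the sky rates are illustrative assumptions, not a standard):

```python
RN = 1.5      # read noise, electrons per sub (assumed)
k = 10.0      # require sky variance >= k * read-noise variance (assumed)

t_min = {}
for sky_rate in (0.5, 2.0, 10.0):        # sky e-/s per pixel, dark to bright site
    t_min[sky_rate] = k * RN**2 / sky_rate   # shortest sky-limited sub, seconds
    print(sky_rate, round(t_min[sky_rate], 1))
```

Under these assumptions, a bright suburban sky is sky-limited within seconds, while a dark site needs noticeably longer subs before shortening them becomes free.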
SharpCap Smart Histogram Brain