The Starlight Camera is the SX-825. The manual for the camera is here, which is where I got most of the specs from: www.sxccd.com/wp-content/files/Trius-PRO-825-handbook-1.pdf. I should have put that in the video. Sorry for the oversight! None of the products in this video are sponsored by any company. I do not receive products to review from manufacturers in order to keep these types of videos as honest as possible. You can support me at buymeacoffee.com/deepskydetail to help keep the channel independent!! :)
"My arm is in a sling because some guy at the bar said 'Zwo cams are best', and I had to teach the brute a lesson. Sure, my arm got broken, but you should see HIM."
I'll take your advice as directed: with a grain of salt. Yes, binning can help, but typically only if your calculated arcsec/pixel falls below about 1:1. I do find that on both my 72mm f/4.9 and 102mm f/7 units, a dither and 2x drizzle provides a dense, nicely resolved image, albeit with the occasional artifact. As for CCD vs CMOS, I've only compared them in non-astro cameras, but I found the CCD sensor to offer a bit more color contrast. As a result, my ancient 8-megapixel Canon S95 has become my favorite street camera over a larger 1" CMOS unit. Though that comment is outside the scope of your video, I think it may point to agreement with your findings....
That's pretty interesting about your Canon camera. And of course, binning isn't always the best plan of action, depending on the pixel scale. Thanks for the comment!
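For reference, the image-scale rule of thumb behind that arcsec/pixel figure is easy to compute. A minimal Python sketch; the 3.76 um pixel below is just an assumed IMX571-class value for illustration:

```python
# Rule of thumb: image scale in arcseconds per pixel.
def image_scale(pixel_um: float, focal_mm: float) -> float:
    """Sampling in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# The two scopes mentioned above (72 mm f/4.9 and 102 mm f/7), with an
# assumed 3.76 um pixel (IMX571-class) purely for illustration.
for aperture, f_ratio in ((72, 4.9), (102, 7.0)):
    focal = aperture * f_ratio
    print(f"{focal:.0f} mm: {image_scale(3.76, focal):.2f} arcsec/px")
```

With these assumed numbers both setups sample above 1 arcsec/pixel (about 2.2 and 1.1), consistent with the comment's point that binning only starts to pay off below roughly 1:1.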
The filter is extremely important here. 3nm is very restrictive, but not necessarily "better". The slightest shift in its bandpass can have drastic consequences, and most popular brands (i.e., ones not costing more than the instrument) suffer from significant variability.
100% true! Great point, and Cuiv made a similar comment :) It's very interesting though that the difference in SNR corresponds pretty well to the difference we'd expect based on pixel size!
Hi, and thanks for this video! Well done! I'd like to add something to your considerations that I hope helps solve the "mystery". As you stated, quantum efficiency differs by wavelength, and we can see that in the QE curves of the sensors. In my opinion there is too much hype around camera spec sheets, especially on the read noise side. You can only compare two sensors that sample the same angle of sky per pixel, because photometry rules here, not the system: a fixed flux of photons per second per unit angle exists, so you can only compare different systems at the same sampling. The old CCDs had much more precise electronics, with a single readout point for the data. Today's CMOS sensors have an A/D converter for each pixel, and they are engineered for fast reading, not precise measurement. The published graphs emphasize the low read noise of CMOS compared to CCD, but is it really so? CCDs had a fixed gain, calculated to optimize dynamic range. The adjustable gain of modern CMOS has the effect of "changing the read noise", which looks very low compared to CCDs. The problem with changing the gain is that you count fewer and fewer electrons in a bigger and bigger block. Since the full well is fixed by the intrinsic physics of the silicon, the bigger the block, the less precise the reading and the smaller the usable full well. Consider a CCD with 5e- read noise on a 50k full well: that is 0.01% of "noise", better to say uncertainty, while 3e- on a 10k full well is 0.03%, three times more! Another thing to consider is the declared full well... In reality it depends only on the area of silicon used for the photoelectric effect. A 3.7x3.7 micron area of silicon, even if it is perfect, can't hold more than 12-16ke-; that's physics, not advertising... Maybe the CMOS has a memory and reads the pixel as it approaches full, retaining the information for a final sum (buffering), but that is not true full well. In the end we are using sensors that aren't even close to being engineered for precise measurement like the CCDs were, but for different industrial applications. We use them because they are cheap compared to CCDs, and also because CCDs are no longer being produced. So in the end it's not the absolute numbers you have to look at, but the percentages and the precision of the data read out. Uncertainty is "noise" in the digital domain: we are no longer taking an image, we are making a measurement, and how precise it is, is the SNR, which translates into contrast when we choose to represent the data as an image. Remember, they are just numbers in the digital world, or "datels", and you could play them back as sound if you wanted; they would sound like horrible noise to your ears, but they would still be EXACTLY your data!
Thanks for the comment! I think there is some great information here. I agree that the angle and sampling of the sky are super important, and it's a major problem in comparing the two images. I'm in the process of doing more rigorous tests, and all the conclusions in this video are very tentative! About CMOS vs. CCD, this is really good info. Would you have some sources for me to look at so I can study this more? I'd never thought about the read noise to well depth ratio before, and it makes sense on an intuitive level. Let's see if I'm understanding this: if you expose to the maximum without overexposing, the CCD's read noise as a percentage of the actual signal is less than that of a CMOS camera? And that's because, in order to get the low read noise value on a CMOS camera, you're actually losing dynamic range by decreasing the full well depth? Anecdotally, one thing I noticed about my Starlight Xpress 825 was that it does seem to be less noisy than my 294MM, even though the read noise is greater in the CCD. I just felt that the images were smoother somehow, but I might be wrong, and it might be due to pixel size. Much to learn, and if you have some sites/books/articles to share, please do! You can always email me at deepskydetail at gmail.com!
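Just to check the arithmetic from the comment above, here's a quick sketch using its own example numbers:

```python
# Read noise as a fraction of full well, using the numbers from the comment.
sensors = {
    "CCD  (5 e- read noise, 50 ke- full well)": (5.0, 50_000),
    "CMOS (3 e- read noise, 10 ke- full well)": (3.0, 10_000),
}
for name, (read_noise, full_well) in sensors.items():
    print(f"{name}: {read_noise / full_well:.2%} of full well, "
          f"dynamic range ~{full_well / read_noise:,.0f}:1")
```

So by this measure the CCD's 5 e- is the "quieter" readout relative to its full well (0.01% vs 0.03%, and ~10,000:1 vs ~3,333:1 dynamic range), which is the commenter's point.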
@@deepskydetail Thanks a lot! My only purpose was to stimulate curiosity about this, and I hope it's useful... First, I'd like to point out that with digital cameras we are no longer "taking pictures" but "making measurements". Our intent is to measure, as well as possible, the intensity of the energy (emitted in the form of photons) from a subject. Our camera is an array of "micro-sensors" that we call pixels (even though a pixel means something different in the digital world). All we can do is measure the total energy that falls into each pixel in a given time. We choose to represent these energies as light reproduced by a monitor, which is natural since we are capturing emitted light. But the point is that, even in a perfect world, we can only capture that total energy and nothing more, no matter the system; the best possible result is to record that exact energy. When we take such measurements, errors creep into the process and alter the "real" data we want to capture. What we call "noise" in digital imaging is exactly that: how much of the data is certainly from the source we intend to measure, and how much is not? That is the SNR, which would be more correct to call certain vs. uncertain. Read noise is part of the electronic noise; it is something related to the system that we can't measure or take out of the equation, because it is not fixed. When we say 5e- of read noise, it can be anywhere from +5 to -5, and can even be 0! It is different every time, and we have no way to measure it per exposure. Other noise sources we can deal with to some extent, e.g. dark current noise: we can take dark frames and subtract them from the image to "correct" the measurements, but we can't remove the read noise uncertainty from the dark frames either, so another small error contributes to the final measurement (that's why professionals use cryogenic cooling on their cameras, and don't take flats at all). Anyway, the main difference between CCDs and CMOS is their purpose. The former (CCDs were originally invented as computer memory) were made for taking precise measurements, and they involve complex electronics and clocking systems, which is why they need a lot of power. CMOS, on the other hand, was born for industrial uses that didn't need a high degree of precision. At first they were used as counters or triggers for industrial machines: imagine a conveyor belt carrying lots of cans of Campbell's soup; a CMOS sensor under the belt could see the light darken as each can passed and count them. Those were the kinds of applications CMOS was intended for. Today we are far too few users for the manufacturers to consider making a CMOS sensor as precise as a CCD for measurement. We might be 1,000,000 people, but they make 8 billion CMOS sensors per year for the mobile phone industry alone! Still, we are getting smarter, since we don't really take pictures anymore; we do statistics instead! That's why we keep getting better results even with sensors that weren't made for our purposes. It is important to understand that read noise is an error we can't remove, but we can bring it down to a level we are comfortable accepting, and the same is true of all the total noise we can measure or deal with in some way.
As a fixed number within a range, it is indeed a percentage: with higher full wells the uncertainty is a smaller and smaller percentage, while on very low full wells it becomes higher and higher as a fraction of the whole signal. We are adapting ourselves to smaller pixel sizes, and this is even convenient for us because we need a shorter focal length to achieve the same spatial resolution per pixel compared to CCDs, but it comes at the cost of data precision and full well. We have no way of knowing what any CMOS converter does behind the scenes, since that is an industrial secret. For example, nowadays we take flats just to correct the "vignetting" of the system, but with CCDs we were also correcting sensitivity variations and discrepancies from one pixel to another! This is because CMOS sensors have a map inside the electronics that corrects those values, but how precise is that correction? We don't know, and we cannot change it. None of this trickery was present in CCDs. In the end, that's mainly why your CCD frames seem more "smooth" to you: because they are! The data has a higher degree of precision. Sorry, I wrote too much... Anyway, here are two great sources to read if you like: CMOS/CCD Sensors and Camera Systems, Second Edition, by Gerald C. Holst and Terrence S. Lomheim; and Photon Transfer, by James R. Janesick.
@@deepskydetail Sorry, but I'd like to add another, maybe useful, consideration... There is another difference between CCDs and today's CMOS sensors: quantization noise. The CCD has a true 16-bit converter, while all current CMOS sensors are dual-read 14-bit, meaning they read the data twice (and I don't know whether the double reading also doubles the read noise...) through converters at different gains and interpolate the results to "obtain" a 16-bit approximation. Try shooting a subject with good dynamics and large light differences using the same scope, the same sampling angle, and the same fairly long single-exposure length, e.g. 300s (you will probably saturate the CMOS, not because it is more sensitive but because it has a smaller full well), with both the CCD and the CMOS, and then look at the histograms... The CMOS histogram will be much "smaller" and "spikier" than the CCD one. Then try increasing the non-linearity of the representation in fixed steps on both images, and you will see that the CCD retains more and more "tones" and the ability to distinguish more light differences, while the CMOS will "appear" to sit in a very small area with fewer tones to show. This is another effect of the precision of the data readout...
Thanks for the info and sources! I now really want to test those ideas, like comparing the histograms. Sounds like some ideas for new videos ;) I also checked your Astrobin images and was blown away! Great images!!
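As a first pass at that histogram test, here's a synthetic sketch of just the quantization part. Whether current CMOS ADCs really are dual 14-bit reads, as described above, I can't verify; this only shows the generic difference in usable tone count when a faint signal occupies a small slice of the range:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic faint "sky" occupying a narrow slice of the full range,
# as a narrowband sub typically does (values are invented).
signal = rng.normal(loc=0.02, scale=0.004, size=100_000).clip(0, 1)

for bits in (16, 14):
    levels = 2**bits
    quantized = np.round(signal * (levels - 1))
    print(f"{bits}-bit: {np.unique(quantized).size} distinct tones used")
```

A 14-bit quantizer has 4x coarser steps, so the same faint signal lands on roughly a quarter as many distinct tones, which is what a "spikier" histogram reflects.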
Those are some very interesting points! Actually, I'm also a bit confused by the IMX571 cameras. I'm comparing mine to my old IMX183 camera, which has tiny pixels. BUT! Comparing both on my Celestron RASA 8 at f/2.0 and SW 130PDS at f/5.0, I'm not sure which is better. On nebula targets the 183 doesn't seem very good. On the other hand, the 571 is perfect for broadband targets like dark nebulae. But I don't have the deep knowledge to compare these cameras and telescopes like you do. So, keep going!!
Thanks for the comment! There are so many things to consider with these cameras that I feel I could do a whole series on them, because there is so much I just don't know. If you do want to compare, feel free to send me an email!
Hi Mark, just a question about bigger pixels... You also have an ASI294MM Pro, with even smaller pixels but good QE!? Please comment on the 294MM, which I also have and love!
I hope you get better soon, so let's hope your skies stay cloudy until you're fit again lol. I'm guessing that with OSC cameras the sensitivity is much reduced. My other half has an Atik 314+ with a pixel size of 6.45; in its day it was a good camera, but she now uses an Altair 269C with a pixel size of 3.3, and it's just better. The same goes for me: I have an old QHY8 OSC with a pixel size of 7.8, but it's nowhere near as sensitive as the Altair 26C. I tried a 12nm dual-band filter in front of it and it couldn't see anything well enough to reach focus, so it's consigned to collecting dust right now.
Thanks! Great comment! I agree, you have to look at a lot of things. A lot of the older CCD cameras are quite a bit worse than the 2600/6200MM when it comes to quantum efficiency, so I don't doubt that some of them can't see anything through filters! Some of the Starlight Xpress cameras have quite a bit better QE (like my little SX-825 beating the 2600MM in QE at Ha). A slightly bigger pixel size would help some of the CMOS cameras, imo. But of course, with the CMOS cameras you can still bin. And the larger field of view from the bigger sensors is also really important, since you don't have to do mosaics nearly as often anymore!
If you look at scientific-grade cameras, most of them have substantially larger pixels than modern consumer-grade CMOS chips. I believe it's for this reason: they're less worried about resolution and more about SNR.
That makes sense to me. Also, a lot of professional telescopes have quite long focal lengths, and bigger pixels help match the resolution. I think you're 100% right that pixel size is important for getting good SNR at long focal ratios and large apertures! It reminds me of an Astro Imaging Channel live stream where they talked about why you'd want bigger-aperture scopes.
The real pixel size of the mono pixels, without the Bayer matrix, is half of the 5.9 um. Sony would need to develop another chip with bigger pixels. That would be a significant evolution of the current full-frame mono cameras, and it depends on the chip producer, but astro is not a very interesting market for them.
How can one talk about SNR without mentioning 1) exact light pollution numbers, 2) Moon phase and distance, 3) exposure length, to determine contribution of read noise?
Great comment! Here's some information to help clarify things. Most of the unknowns you mentioned are in the viewer's favor, though, so I don't think those factors explain the difference in SNR measured. 1) Exact light pollution is an unknown for the comparison, but the Ha filters should get rid of most of this effect. The viewer's Ha filter is better than mine (edit: maybe?? Cuiv brought up some good points). If anything, this should increase SNR for the viewer's image. Additionally, the SNR calculator app tries to measure light pollution and subtract that out of the calculation. 2) Moon phases: mine was taken on October 10, 2019, at 90% illumination. The viewer's was taken around July 4, 2024, almost a new moon (2.88% illumination). Another advantage for the viewer's image. 3) At 8:31 I note that both subs were 300 seconds. The SNR calculator takes read noise into account. The fact that the CCD's SNR measurement is about 1.7 times higher than the CMOS's, and the CCD's pixels are about 1.5 times bigger (close to the theoretically correct number when accounting for scope differences), is either a really weird coincidence, or pixel size is a really good explanation for the increase we observe.
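To make that arithmetic concrete, a back-of-the-envelope sketch. The pixel sizes are the real specs; the f-numbers are hypothetical stand-ins, since the scopes aren't specified here:

```python
def expected_snr_ratio(pix_a_um, f_a, pix_b_um, f_b):
    """Shot-noise-limited per-pixel SNR ratio of camera A over camera B.
    Photons per pixel scale with (pixel size / f-number)^2, so SNR scales
    with pixel size / f-number."""
    return (pix_a_um / f_a) / (pix_b_um / f_b)

# Pixel sizes are real specs (SX-825: 6.45 um, IMX571: 3.76 um);
# the f-numbers below are placeholder guesses, not the actual scopes used.
print(f"{expected_snr_ratio(6.45, 7.0, 3.76, 6.0):.2f}x")  # ~1.5x with these guesses
```

With the CMOS on a somewhat faster scope, the raw 1.7x pixel-size ratio gets pulled down toward the ~1.5x range discussed here.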
@@deepskydetail It seems like the DSO shot noise is the dominant part of the overall noise in your measurement/calculation, because 1.7 times larger pixels will indeed produce 1.7 times higher SNR, everything else being equal (1.7 times larger pixel -> 1.7^2 times more signal -> sqrt(1.7^2) = 1.7 times larger SNR.) This tells me that, for your measurement of noise, you are choosing a relatively bright part of the nebula where DSO shot noise dominates. To me personally, those parts of the nebula have never been very interesting because they are bright and will look just fine with reasonably long integration time. It's the dimmest parts, where light pollution shot noise and read noise dominate, that's where SNR measurements/calculations are most interesting. Case in point: Goofi's Imaging Challenge at Cloudy Nights is the Squid nebula for this month. My first session was 4 hours at f/7, 3.76 micron pixels, 3 nm Oiii filter, Bortle 5.5, 240s subs. SNR of the brightest part of the Squid was 2 after 4 hours. Now that will require some serious integration time :).
You're right. I did choose a brighter part of the nebula because it was easier to make sure I got the exact same area. The fainter parts are definitely where it's at! It looks like I'll need to do a follow up video eventually!! I wonder whether the increase in signal can actually offset the read noise in those areas.
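A rough way to put numbers on that question. All the rates below are invented for illustration; only the formula matters:

```python
import math

def sub_snr(signal_rate, sky_rate, read_noise, t):
    """Per-pixel SNR of one sub, in electrons; dark current ignored."""
    s = signal_rate * t
    return s / math.sqrt(s + sky_rate * t + read_noise**2)

# Invented rates in e-/pixel/s; 300 s sub, 1.5 e- read noise.
for label, sig in (("bright core", 0.50), ("faint wisp ", 0.01)):
    print(label, f"SNR = {sub_snr(sig, 0.05, 1.5, 300):.2f}")
```

In the bright region the signal term dwarfs everything else, while in the faint region sky shot noise and read noise dominate the denominator, which is exactly where the per-sub SNR collapses.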
These are a couple of several very important variables to control in a valid controlled experiment. I'll also include sky clarity (which can vary significantly over time) and target altitude. I know this may not be practical, but really there are only two ways to do this: either side-by-side imaging rigs, which I've seen done, or simulating the results (or differences) with a robust model. I know, it's complicated... which is why it's rarely done robustly.
Get well soon mate! So the results are not super surprising to me since, as you mention towards the end, you can bin, trading resolution for SNR. A 2x2-binned pixel on the IMX571 sensor still likely has much less read noise than a pixel on the SX. But I honestly don't think that's it, nor is it the QE (although that certainly contributes). There is a decent likelihood that it is simply the filter not performing up to spec, especially if it's from *brand name redacted whose very narrowband filters tend to be hit or miss without the end user being able to tell*. It's also possible that, combining low QE + 3nm bandpass + a potentially defective filter + not-so-fast optics + a Bortle zone likely better than mine, this sub exposure simply doesn't swamp the read noise (although I would need to actually compute stuff to know for sure; the probability is low). Super cool potential issue to bring to light though!
I've thought about it since your videos about filters popped up. However, I think he is onto something. I always wondered why my OIII seems to be brighter than my Ha. This sensitivity-versus-wavelength graph completely explains it. I thought it was my filter's fault (an L-Ultimate), but actually it may be the camera, a 2600MC... although I would still like to have my L-Ultimate tested😊
Thanks, Cuiv! I agree, those are all possibilities, and I cannot rule them out (I don't actually know the brand of filter the viewer used, so it could be defective like you mentioned). But I keep coming back to the fact that the CCD's SNR was about 1.5-2.0x better than the CMOS's, which is right in the range we'd expect based on the difference in pixel size. Plus, I've received similar feedback about the camera from more than one viewer. It's a very parsimonious answer, but of course it might not be correct!!
@@deepskydetail Good day! I wasn't sure in which thread to reply, but this seems the most appropriate one. First of all, I really wish you the best with your shoulder! I have a lot of thoughts on all this. Most of the issues were already brought up, but the filter is, in my opinion, the most relevant one. Let's just say things haven't been great for us. What I can say is that we noticed a massive improvement when we replaced the observatory's STX-16803 with the C4-16000. Calibrating the latter is very difficult, but the SNR increase was vast, to say the least, simply because it is more or less impossible to expose the CCD in the 16803 for long enough to get significantly over the read noise. With that said, here is my understanding of the entire concept. Signal is the number of electrons in the sensor generated by the actual light from space we want to capture. Or at least that's the signal we want. Electrons are also generated by dark current and light pollution. That's the signal we don't want, but we can subtract it (dark calibration and background extraction). However, all three of these components generate Poisson noise, and that cannot be subtracted. An additional, fixed component is the read noise generated by the readout. This does not scale with exposure time, so we can choose the exposure time such that all the other noise components massively exceed the read noise (some people use 95%, some 99%). This is a very important factor, and in my honest opinion only images for which this holds should be compared. I can't do the maths here because you didn't mention the exact camera model :) Usually the CCDs have much higher read noise, so the newer CMOS cameras are much more practical in terms of exposure lengths, which is another advantage. But let's ignore that for a second and assume that we choose the exposure length required by the camera with the higher read noise (probably the CCD) for both cameras. Then we can assume that we have mostly eliminated read noise as a factor in the measurement. (BTW: for Bortle 5 and a 3nm filter, in my experience 300s is too short, even for the IMX455/571.) Let's further assume that ALL other conditions are the same: same bandpass, same sky quality, same optical parameters. Then we still have the dark current to worry about. For the IMX455/571, if you cool down to 0 or -10°C, the dark current is low enough to be ignored, even with narrowband exposures at f/6 under mediocre skies (Bortle 5); I measured this myself. Can't say anything about your mysterious CCD though ;) (This is also the reason why the image at gain 0 has more noise than the image at gain 100: the exposure time was the same, but the read noise was much higher.) So finally, also ignoring read noise, the only thing left under the outlined equal conditions is the quantum efficiency. But as you pointed out, the difference is marginal. I don't think it plays a role here at all; it especially can't explain the difference you see. But here we come to the hard part. I understand that the factors I assumed to be equal are in fact not, and that you assume most of them in your test case are working in favor of the CMOS here. And I tend to agree. But that still leaves sky quality and especially the filters. I really, really strongly agree with @CuivTheLazyGeek on this one. Narrowband filters with a 3nm bandpass from certain more affordable manufacturers tend to be very far off.
That's why we verify all the filters we buy for the observatory ourselves, using our high-resolution spectrograph. I guess I don't need to explain why the weather is important: transparency and moisture, especially combined with light pollution, can easily change the signal by the amount you see here. In the end, you explain the difference in SNR you see with different pixel sizes. I don't know exactly how your SNR calculation works, so maybe you can give some pointers on whether the number of pixels influences the calculation somehow. Note that I'm referring to the number of pixels here, not the pixel size, for the following reason: assuming the exposure length was chosen long enough, i.e. such that the read noise becomes insignificant compared to the Poisson noise, and the sensor temperature is low enough that the Poisson noise of the dark current can be neglected compared to the Poisson noise of the light pollution and the actual signal, the entire noise in the image depends only on exactly these two factors and the quantum efficiency of the sensor. In other words: the flux density is the same, and if we integrate over the same area we get the same flux, which is multiplied by the quantum efficiency and then converted to ADU using the gain. In this scenario only the quantum efficiency influences the actual signal, and therefore, via the Poisson process, also the SNR. So in my opinion the comparison you are showing is unfortunately very flawed. What you showed here is that observing conditions, other equipment (scope, filters) and the technical knowledge of the observer are maybe even more important than some detail of the camera parameters. Additionally, all the measurements I have taken with CCD and CMOS cameras clearly indicate that, in my personal experience, CMOS cameras are really much better than the CCDs. I'm very much looking forward to being proven wrong though! In the coming night I'll take an exposure of 208 seconds with my ASI6200, which should be equivalent to a 300s exposure with an f/6 scope. (In my experience, with my 6nm Ha filter and my conditions of ~Bortle 5, this exposure is waaaay too short to overcome the read noise anyway, so I still think this comparison is not correct.) Meanwhile, I'd like to give you a 600s exposure I already have of that region. Framing is a bit odd because it's part of a larger mosaic, but anyway. Other parameters: scope: FSQ106ED @f/5 (native); filter: Astronomik 6nm; gain 100; offset 50. If you need other parameters or calibration frames, please let me know. I heard that YouTube removes URLs, but here goes nothing: www.sternwarte.uni-erlangen.de/~weber/NGC7000.fits
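To put rough numbers on that "expose until read noise is insignificant" criterion, here is a minimal sketch. The sky rate is purely an assumed value; it varies hugely with filter, f-ratio and sky:

```python
def min_sub_length(read_noise_e, sky_rate_e_s, allowed_increase=0.05):
    """Shortest sub for which read noise inflates total noise by at most
    `allowed_increase` over pure sky shot noise (dark current ignored)."""
    k = (1 + allowed_increase) ** 2 - 1
    return read_noise_e**2 / (k * sky_rate_e_s)

# Invented example: 1.5 e- read noise, 0.05 e-/px/s sky through a 3 nm filter.
print(f"~{min_sub_length(1.5, 0.05):.0f} s")   # ~439 s -> 300 s subs fall short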
Hi @halloduda8142! Thanks for the response. I'll try to respond to your points, but forgive me if I miss something; there's a lot in your comment! You can also email me at deepskydetail at gmail if you'd like. 1) Massive improvement when you replaced the observatory's STX-16803 with the C4-16000 because of low read noise: it's my experience that read noise is very low on the ladder of factors when it comes to the SNR of long exposures. Could the difference be related to the QE of the cameras? Looking at the STX-16803, the QE is around 60% at 500nm. I couldn't find the specs for the C4-16000. Since both have the same pixel size, they should be even in that regard. A lot of CMOS cameras DO have better QE than CCD cameras, but I don't think that's always the case (like my SX-825 being comparable to the ZWO 2600MM). 2) "Signal is the number of electrons in the sensor generated by the actual light from space we want to capture [....] so we can choose the exposure time such that all the other noise components massively exceed the read noise." I agree with all of this! Also, my camera is an SX-825, by the way. Its read noise is 3.5 e-, which is slightly worse than the ZWO at gain 0 (3.25 e-), and nearly three times the ZWO's 1.25 e- at gain 100. I don't actually think the read noise affected the SNR calculations in this video very much. Even if it were 10 e- (about 3x worse), the SNR would still be considerably better than the ZWO's in my example. 3) "Narrowband filters with a 3nm bandpass from certain more affordable manufacturers tend to be very far off. That's why we verify all the filters we buy for the observatory ourselves, using our high-resolution spectrograph." Yeah, I agree it very well could be the filters! I can't rule that out, given all the other factors that I couldn't control in the test! 4) "So maybe you can give some pointers on whether the number of pixels influences the calculation of the SNR somehow [....] the entire noise in the image depends only on exactly these two factors and the quantum efficiency of the sensor." Pixel size is the important factor here, although we probably agree and are just thinking about things differently. I agree that most of the noise is determined by the DSO signal strength and the LP signal in long, cooled exposures. The SNR calculation is going to be determined by the brightness of the pixel divided by its square root. Assuming the same scope is used, a bigger pixel is going to collect more light than a smaller one. If a pixel is 2x as big, it collects 4x the amount of light; its shot noise increases by 2 (i.e., the square root of 4), so the overall SNR increases by 2. Instead of 4 noisy little pixels in that area, you have one big one with better SNR. The difference in pixel size between the ZWO and the SX-825 is 2.69 um; in other words, the SX pixels are 1.71x bigger. Everything else being equal, each SX pixel collects 2.94x the light but generates 1.71x the noise, so the SNR ratio is 2.94/1.71 = 1.71. The actual difference in SNR between the SX and the ZWO for one sub was about 1.5 in the SX's favor. Given that the ZWO was on a faster scope, that can pretty much explain the difference in SNR without assuming anything about the filters. 5) "Additionally, all the measurements I have taken with CCD and CMOS cameras clearly indicate that, in my personal experience, CMOS cameras are really much better than the CCDs." I actually agree with you on this!
Most CMOS cameras nowadays are a lot better than older CCDs (but some CCDs were really good!). I think the main factor is that, on average, QE has increased with CMOS. Read noise is a smaller factor; it helps, but for some of the newer CCDs it's actually OK. The thing that holds the ZWO 2600/6200MM back is their QE at Ha, and their pixel size. Binning can take care of the pixel size, but the QE at Ha is pretty disappointing, I think. Maybe my title is a bit clickbaity, which led to confusion! 6) "In the coming night I'll take an exposure of 208 seconds with my ASI6200, which should be equivalent to a 300s exposure with an f/6 scope." Very cool! I'm looking to be proven wrong too :) Also, different parts of an image have different brightness, so we can experiment with brighter and dimmer areas. The dimmer areas are where read noise may matter a bit more. I'm looking forward to the results!!
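For reference, the 208-second figure above follows from the usual f-ratio scaling, assuming identical pixels, QE and filters; a one-liner to check it:

```python
def equivalent_exposure(t_ref_s, f_ref, f_new):
    """Exposure at f_new matching photons/pixel of t_ref_s at f_ref,
    assuming identical pixels, QE and filters."""
    return t_ref_s * (f_new / f_ref) ** 2

print(f"{equivalent_exposure(300, 6.0, 5.0):.0f} s")  # 208 s at f/5 ~ 300 s at f/6
```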
I had the ASI294MM, and although I really liked it for its versatility (2x2 bin, etc.), the flats were a bit of a nightmare. Variable ADU at short exposures really hurt this camera. I ended up selling it and sticking with my trusted 2600MC. I really want to get back into mono, but I'm not sure whether I want APS-C or the bigger full frame. With the new European USB-C rules, I wonder if ZWO will be forced to re-engineer the cameras.
I do enjoy my 294MM too. But I agree that flats with the 294MM are difficult. It's hard to get longer flat exposure times, especially in luminance. A lot of the time, I end up having to redo them!
@Ben_Stewart @deepskydetail Don't stress too much over the flats: find a flat panel with an extremely dim setting, set NINA to the recommended ADU value with a maximum deviation of 2%, and let the app shoot your flats with dynamic exposure. Mine are around 10s of exposure. Guess what: they correct the lights so well that SPCC in PixInsight sometimes doesn't even do much, since the channels are already properly corrected by the flats. This camera indeed goes nuts with short flats or bias frames, so forget that; give her what she needs: long flats, and dark flats instead of bias frames. I've been using it for over a year and I love the results. The small pixels start becoming a little problematic at my 1200mm focal length, but so far deconvolution takes care of the oversampling pretty well, boosting the sharpness of the image. I would not recommend this camera at an even longer focal length, though.
Since I got my L-Ultimate filter I have been wondering why I seem to see much more OIII than Ha, and I blamed it on the filter. Now I know it is the camera, a 2600MC... I have to push it to gain 300 at 300s exposures because of my f/10 8HD... I am wondering if I am wrong to push it that much.
There is also the possibility that the OIII emission line, sitting where it does in the visible spectrum, is affected by light pollution quite a bit. I think that could be a big reason OIII looks much brighter, but I could be wrong! What's nice about the SNR calculator app is that you can do a couple of tests at different gain settings to see how gain affects SNR :)
This is a strange video, sir. The concept you seem to be discovering here is called “sampling” (see also over- and under-sampling) and it represents a trade-off in potential resolution of detail relative to signal vs noise. And you can shift this ratio with binning, or, in post-processing, by using a tool like Integer Resample in PixInsight. No real downside here… but rather more flexibility. The 294’s case is kinda shot by its dark current and the complexity in getting a truly clean (not just looks clean) calibration with narrowband long exposures. I’ve not been able to get better signal vs noise from my 294MM vs my 2600MM, bin2 on the 294MM (i.e. grouping the quad bayer sub-arrays) vs Bin1 on the 2600MM, despite this configuration also lending to the 294MM an advantage in larger “pixels.”
Thanks for the comment. Could you please point to where in the video I claim to be discovering something new? 😉 Around 11:05-11:40 I talk specifically about sampling and the trade-offs between resolution and sensitivity. The point of the video is to figure out why some of my viewers were having a hard time getting good SNR with the camera. It didn't start out as a video about sampling; sampling just came up naturally as I was looking at the data from the two cameras' images. It's interesting about your comparison of the 294 and the 2600. If you have any data you'd like to share, I'd be more than happy to take a look!
I don't think it's the pixel size. Pixel size doesn't impact the image's overall SNR if you're only counting shot noise, since the light gathered is the same. That's why if you pick two mirrorless cameras with different megapixel counts and the same sensor size, they'll perform very similarly, except for a small advantage to the low-megapixel camera due to fewer sources of read noise. I think there are too many variables between the two systems to draw conclusions; the test should be done with the same setup. Edit: also, binning has zero benefit to a CMOS image's SNR. It works on a CCD because the charge of a 2x2 group of pixels is added and physically measured just once, which reduces read noise, while on a CMOS you're still reading every single pixel and just averaging the values mathematically afterwards. So not only is there no benefit to the SNR, but you're also throwing away resolution.
Thanks for the comment! Some responses: 1) Pixel size does matter as well. Let's consider just the light-gathering ability, as you mention. Say a given area of the imaging circle gets 4 photons of light on average each sub-exposure. If there is only one pixel (and of course only considering shot noise and a perfect imaging setup), the SNR is going to be 4/sqrt(4), or 2. The average SNR of the pixel is 2 (2 divided by 1 pixel). If there are 4 pixels, then each pixel will have an SNR of 1 (signal = 1, noise = sqrt(1), SNR = 1). The average SNR of the four pixels is 1 ((1+1+1+1)/4 = 1). The 4 pixels are going to be noisier than the 1 big one: instead of 1 big pixel with 1 value, you have 4 small pixels with slightly different values, and that will look noisier. 2) "I think there are too many variables between the two systems to draw conclusions; the test should be done with the same setup." I agree! I hope to do follow-up tests in the future. I tried to make it clear that the conclusions and the test are flawed and that I need more data. 3) "Binning has zero benefit to a CMOS image's SNR." This is, as far as I know, not true. You still get the benefit of adding together the signal. However, as you mention, you add in the read noise of all the pixels instead of just one. But read noise in newer cameras is pretty small, so you still benefit much as you do with CCD binning. Altair has a good explanation of CMOS binning here: www.altairastro.help/info-instructions/cmos/how-does-binning-work-with-cmos-cameras/
@@deepskydetail Afaik, since they're 4 independent and unrelated sources of noise, when you're averaging shot noise from adjacent pixels you should sum those noise sources in quadrature. That means a single pixel gets 4 signal / √4 noise = 2. Four pixels with 1 noise each get 1*4 / √(1²*4) = 2. It's the same, which makes sense, since shot noise depends only on the amount of light and nothing else. It still works for bigger numbers (just pointing this out since 1 squared might look weird): with 100 signal on 1 pixel you get √100 = 10 noise; with 25 signal on each of 4 pixels (each with 5 shot noise) you get 25*4 / √(5²*4) = 10. Again, you can check, for example, the DPReview comparison: there is very little difference between an A7RIV with 60mpx and an A7III which only has 24mpx; if shot noise changed the result, the difference would be vast (I'll link it in another comment in case YouTube blocks the link). The problem with CMOS binning is that you're not gaining a real SNR benefit: mathematically it's exactly the same as simply reducing the image size, which can be done in post after you've stacked the full-resolution files. So there is no reason to bin a CMOS unless you need faster transfer speeds during acquisition (which is definitely not the case for deep-sky astro).
@@deepskydetail www.dpreview.com/reviews/image-comparison?attr18=lowlight&attr13_0=sony_a7iv&attr13_1=sony_a7iii&attr13_2=sony_a7riii&attr13_3=sony_a7riv&attr15_0=raw&attr15_1=raw&attr15_2=raw&attr15_3=raw&attr16_0=6400&attr16_1=6400&attr16_2=6400&attr16_3=3200&attr126_2=1&attr199_3=1&normalization=compare&widget=1&x=0.086367625548513&y=-0.14081228556828976 Here is the comparison. Of course the A7III wins because its read noise is lower, but that's not even a single stop of difference, even with pixels that are almost 3 times smaller in area.
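For anyone who wants to check this exchange numerically, here is a minimal Poisson simulation. It supports both sides: summing four small pixels matches one big pixel under pure shot noise, and a gap only opens once per-read noise is added:

```python
import numpy as np

rng = np.random.default_rng(1)
n, flux = 200_000, 100.0                      # mean photons over one "big pixel" area

big   = rng.poisson(flux, n)                  # one big pixel catches all the light
small = rng.poisson(flux / 4, (n, 4)).sum(1)  # four small pixels, summed afterwards

def snr(x):
    return x.mean() / x.std()

print(f"shot noise only: big {snr(big):.2f}, four summed {snr(small):.2f}")   # ~equal

rn = 1.5                                      # per-read noise: paid once vs. four times
big_rn   = big   + rng.normal(0, rn, n)
small_rn = small + rng.normal(0, rn, (n, 4)).sum(1)
print(f"with read noise: big {snr(big_rn):.2f}, four summed {snr(small_rn):.2f}")
```

With modern read noise levels of a couple of electrons, the residual difference is small, which is consistent with the DPReview comparison linked above.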
That's a good question! Mine was taken from a Bortle 7/8 zone. I'm not sure about the viewer's image tbh. That's another reason to take these results with a grain of salt! Although the Ha filters should help even things up, I would think.
@@deepskydetail I am fortunate to live under Bortle 4 skies and am amazed how much integration time is needed in Bortle 6+ conditions. I use a Skywatcher 150P for widefield, and a 150mm Mak and a C11 for narrow.
@@daveheider6081 I'm always amazed at the effects of light pollution too, and how it just destroys the fainter parts of the image. Bortle 4 is nice! I'm right now in a Bortle 6-7, and it feels like heaven compared to a Bortle 9 I lived in a few years back!
I appreciate you doing these comparisons. However, I think you are conflating pixel SNR with target SNR, or more precisely SNR per fixed unit of sky area. Yes, a smaller-pixel camera will have a lower SNR per pixel, but if it has four times as many pixels, you can create an equivalent of a camera with 1/4 the number of pixels, because four pixels can be averaged together, and that doubles the SNR at the pixel scale of the larger-pixel camera. This is an extremely important point: you need to normalize to a fixed pixel scale. Also, comparing examples taken on different nights or with different equipment is really difficult, because sky clarity can change dramatically. I've had cases where the sky clarity seemed fairly similar, but the signal-to-noise ratio from two different nights was almost 2x different. Target altitude and Moon phase are also very important factors, as is the background light pollution. And finally, when comparing different filters, it's not just the width of the filter: different filters have different peak bandpasses at Ha, although I admit this factor is typically a 10-15% difference. But still, that's a difference. The two biggest issues are that you need to normalize to a fixed area of sky or fixed pixel scale, and that if you're going to compare two different captures, they need to be done with the same equipment on the same evening.
Great comment! About the different setups/nights, I completely agree with you. As I stated a few times in the video, the comparison is quite flawed, and I really want more data to test things with. That being said, most of the variables that I do have data for, related to the equipment, moon phases, etc., are in the viewer's favor. About the image scale, I also agree with you, which is why I mentioned binning and the trade-offs with resolution at the end of the video ;) Thanks!
@@deepskydetail Thanks for the response. Perhaps you can go back and normalize for pixel scale, see how that impacts your results, and update your video to keep it as current with your latest thinking as possible. Bummer about your injury; I hope you heal quickly. I think this is critical because I see it being missed by many people and even websites. It would be a huge service to the AP community for them to know that in order to compare systems you need to compare at equivalent pixel scale.
That's a good idea!! If I were to guess, I'd think the results will show that the SNR per unit area is very similar for both cameras (one has slightly higher QE, the other has lower read noise). I also think that with digital images, the overall SNR of the image changes how good it looks, and pixel size is one (out of many) important factors in the overall SNR. Even if the SNR per unit area is the same, an image with bigger pixels might reach a better overall image SNR faster than one with smaller pixels, which of course is why binning might be considered. Sorry for the rambling, but I guess what I'm saying is that an image using bigger pixels, all things equal, will look smoother faster than one with smaller pixels. It's human perception at the end of the day that makes the judgment. The trade-off, of course, is resolution (i.e., start zooming in on the bigger-pixel image, and things might start getting blocky).
@@deepskydetail Actually this has been debated extensively on Cloudy Nights, and the general consensus is that you can always downscale the higher-resolution image to the equivalent pixel scale and improve the SNR, even after stacking, so there is really no advantage to the larger-pixel-scale camera unless you are suffering from read noise. The only reason a larger-pixel-scale camera is better is that it swamps read noise more quickly, but as you know, CMOS cameras have very low read noise, so this is unlikely to matter here even with smaller pixels. I think if you work the math you'll find this to be true. So really, the main difference comes down to Ha sensitivity.
@stevenmiller5452 I see what you're saying, and I agree! That does make sense the way you've explained it. The thing is, generally, the people who have contacted me using these CMOS cameras aren't binning, downsampling etc., and they are wondering why the SNR is so bad (and consequently why it takes them so long to get an image they want). They could downsample. They could bin. And that's ok! It's in the video as a solution! But their expectation (based on what I think is marketing) is "the cameras should be faster" by default (i.e., without binning/downsampling), and they're worried something is wrong (when it really isn't).
I have both ZWO 533s. I've noticed the biggest hit I take is for SII. I had a 294MC Pro before the 533s, and I can see the difference. I want a 294MM Pro so that I effectively get two pixel sizes. I can't wait to see a 47-megapixel Ha sub from one of my small refractors.
Yeah, the QE for these chips (e.g., 533, 2600, etc.) does drop off quite a bit; at SII it's around 50%. You can still have two pixel sizes with your 533, though, by binning! Bin 2 mode on the 294MM, from what I understand, is just 2x2 binning (with some gain-setting manipulation going on!). Also, seeing and aperture size influence resolution too! Let me know if you get the full 47MP (I've actually never tried it myself; I generally stick to bin 2 mode)!
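For the curious, software 2x2 binning is just a block sum; a numpy sketch on a synthetic frame (hardware bin modes like the 294MM's may differ internally):

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Software 2x2 binning: sum each 2x2 block (trims odd edges)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.random.default_rng(2).poisson(25, (1024, 1024))
binned = bin2x2(frame)        # 4x the signal per pixel, half the resolution
print(frame.shape, "->", binned.shape)
```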
You need to understand EMVA1288. You haven’t discussed system gain at all, what offsets and gains are ZWO using before they give you an image? Raw is never raw.
Thanks for the comment! Let me know if I'm misunderstanding something, but I think the video does address those issues. I discuss gain for the first half of the video. ZWO allows users to choose offset and gain within astro-imaging software, and the SNR calculator I used subtracts the offsets and accounts for gain through calibration frames, based on the methodology found here: www.cloudynights.com/articles/cat/column/fishing-for-photons/signal-to-noise-part-3-measuring-your-camera-r1929 If there is something else I'm missing, let me know! I'm always eager to learn, especially if I'm doing something wrong!
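As I understand it, the gain measurement in that article boils down to the standard flat-pair photon-transfer method. A sketch of the core calculation, with read noise neglected, so treat it as approximate:

```python
import numpy as np

def gain_from_flat_pair(flat1, flat2, bias_level):
    """Estimate system gain (e-/ADU) from two matched flats via the
    photon-transfer pair method: differencing cancels fixed-pattern
    noise, leaving ~2x the per-frame shot variance (read noise neglected)."""
    f1 = flat1.astype(np.float64) - bias_level
    f2 = flat2.astype(np.float64) - bias_level
    mean_signal = 0.5 * (f1.mean() + f2.mean())     # ADU above bias
    var_per_frame = np.var(f1 - f2) / 2.0           # ADU^2, shot noise only
    return mean_signal / var_per_frame              # e-/ADU

# Synthetic check with an invented "true" gain of 1.25 e-/ADU:
rng = np.random.default_rng(3)
true_gain, electrons, bias = 1.25, 20_000, 500
f1 = rng.poisson(electrons, (512, 512)) / true_gain + bias
f2 = rng.poisson(electrons, (512, 512)) / true_gain + bias
print(f"{gain_from_flat_pair(f1, f2, bias):.3f} e-/ADU")  # ~1.25
```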
@@deepskydetail Hi, sorry, I probably sounded a bit abrupt there. The process above largely follows the protocol designated in EMVA1288; however, it is easy to assume "gain" is the gain setting in your camera UI rather than the manufacturer's gains. We can look at the EMVA report for each of the Sony sensors, which are used in a variety of cameras, but each camera will give a very different result. This is because each manufacturer sets its own analogue gains, offsets, pixel masking, artefact-management processing, etc. I think you already know this... but the setup defined at Cloudy Nights misses some critical factors which will wildly affect the outcome of your result: 1) Wavelength of light. EMVA specifies a single wavelength with a FWHM of ~30nm, from memory. This is because the QE response of your sensor varies hugely across the spectrum. You could use a monochromator to analyse each wavelength, but that is practically difficult. 2) Light source. EMVA specifies a disk-shaped source larger than the diagonal of the sensor, placed at a distance of 8x its diameter. There is a very specific reason for this, and the results vary considerably if this is changed even slightly. 3) The Cloudy Nights method suggests white paper for the flats. Again, spectrally this changes the light source and renders a camera comparison valid only if both cameras were under exactly the same conditions (temperature etc. too), and ultimately this is not how the cameras are used... if you want to compare, it should somehow relate to the wavelengths you will use for astro work. This is the Achilles heel of EMVA: it does not in any way compare cameras in a way related to the application (astro or otherwise). Done correctly, it is the best we have for pure sensor comparison (at a single wavelength), but it doesn't take into account lens/telescope transmission, emission wavelengths, or anything else in the total imaging "system". There are many other points, but this is a long enough comment. What would be more interesting is to create a database of imaging systems (cameras with known scopes, Barlows, eyepieces, etc.) against a known object (the Moon). Still subjective due to seeing (which is again subjective), but possibly far more useful for amateur astrophotography. I hope that gives some info worth considering.
I should also have mentioned that the above says gain "is the conversion rate between the raw numbers you get out of the camera (ADU or Analog Digital Units) and actual electrons". This is misleading. It is the amplification of the signal (minus read noise, but including shot and kTC noise) with all manufacturer corrections included, plus quantisation noise.
What cameras do/did you have if you don't mind me asking? I think that generally the newer CMOS cameras have less read noise. But the CCDs tended to have bigger pixels.
@deepskydetail I'm sorry, I should have been clear: my first digital cameras were from Fujifilm. They switched to CMOS for cost reasons. Hopefully my new Move Shoot Move will help.
I've been really enjoying your videos for some time now. It's great that you're quick to disclose when comparisons aren't fair or the analysis is flawed. Your conclusions are carefully weighted. Best of luck with your shoulder recovery.
Thank you, so much!! :)
Just wanted to let you know I shouted you out in my latest video, which comes out today. Thanks for your great work!
Thanks for the shoutout! I appreciate it :)
"My arm is in a sling because some guy at the bar said 'Zwo cams are best', and I had to teach the brute a lesson. Sure, my arm got broken, but you should see HIM."
lol!😂 I would definitely be the one getting the short end of the stick in that scenario!
I use colour cameras. I had the ASI294MC and traded up to the 2600MC. For photographing galaxies, my exposure time went right down, and the quality went right up. I'm not really a nebula photographer, so my life is not dominated by H-alpha. I also see the bigger chip size as a significant benefit of CMOS over CCD.
Thanks for the comment! The bigger field of view is amazing, I think!
Really interesting subject Mark! Glad to see you’re well and on the mend!
Cheers!
Doug
Thanks, Doug! :)
Great video! Hope you heal sooner rather than later.
Thank you!!
Very cool comparison. You have very good out-of-the-box ideas, as always hahaha. Many thanks, friend! I wish you a good healing!
Thank you!
Hello! There have been some heated discussions related to QE and sensitivity graphs. If you look at the newest small chip, the IMX585, it shows much greater sensitivity to Ha and near-IR than the 533 or 2600. Granted, it's a small sensor, but it may be capturing more than the others while being smaller.
Yes, that's a good point! :)
Get well soon, and please be very careful to avoid frozen shoulder. My orthopedic surgeon failed to warn me about this after I had shoulder surgery, and I developed it. It turned into nearly 18 months of hell, with lots of pain and little sleep.
Thank you! My doctor hasn't said anything about that. I'll definitely check it out to try to avoid it.
wow! - interesting
Thanks for the info 👍
Thanks for watching!
Hi and thanks for this video! Well done! I would like to add something to Your considerations and I hope is useful to solve the "mystery". As You stated there is a different quantum efficiency for a given wavelenght and we can see that in the Q.E. curve of the sensors. There is too much hype, on my opinion, reguard to the tech specs of the cameras, specially on the readout noise side. You can only compare two sensors that are sampling the same angle of the sky per pixel since the Photometry is ruling and not the system. A fixed flux of photons per second per angle considered exist and You can compare different systems only on that same angle of sampling. The old CCD's used to have a much more precise electronics and it was a single point of lecture of the datas. Today's CMOS have a DA converter for each pixel and they are not engineered for "precise" measurement but for fast reading. When You look at the many graphs published for the CMOS, they enrich the low readout noise of the CMOS compared to the CCD's but it is really true? CCD's used to have a fixed gain, calculated for to optimize dynamics. Nowdays changeable gains on CMOS have the effect to "change the readout noise" and that is very low compared to the CCD's. The problem in changing the gain is that You count less and less elctrons in a bigger and bigger block. Since the full well is not changeable and it depends on the intrinsic physics of the silica, the bigger the block, the less the precision of the lecture and the smaller is the usable full well. Consider a CCD with a 5e readout noise on a 50k full well, it is 0.01% of "noise" better to say uncertanty while 3e on a 10k full well is 0.03% of noise, three times more! Another thing to consider is the declared full well... In reality it only depends of the area of the silica used for the photoelectric effect. a 3.7x3.7 microns area of silica, even if it is perfect can't retain more than 12/16ke, this is physics not adverticement... Mabye tha CMOS has a memory and it reads the pixel when is closing to be full and retain the information for the final sum (buffering) but it is not true full well. In the end we are using sensors which are not even colose engineered for precise measurement like the CCDs but for different industrial applications. We are using these sensors becouse they are cheap compared to CCDs and also 'cos they are no more producing CCDs. in the end is not the absolute number You have to look but the percentage and the precision of the readed data. The uncertanty is "noise" in digital since we are no more taking an image, we are making a measure instead and how precise is it is the SNR which translate in contrast when we chose to represent these datas as an image. Remember tha they are just numbers in the digital world or "datels" and You can play them as a "sound" if You will, they will sound like an horrible noise to Your ears but they still are EXACTLY Your datas!
Thanks for the comment! I think there is some great information here. I agree that the angle and sampling of the sky are super important here, and it's a major problem in comparing the two images. I'm in the process of doing more rigorous tests, and all the conclusions in this video are very tentative!
About the CMOS vs. CCD, this is really good info. Would you have some sources for me to look at so I can study this more? I'd never thought about the read-noise-to-well-depth ratio before, and it makes sense on an intuitive level. Let's see if I'm understanding this: if you expose right up to the maximum without overexposing, your read noise as a percentage of the actual signal is lower than that of a CMOS camera? And that's because, in order to get the low read noise value in a CMOS camera, you're actually losing dynamic range by decreasing the full well depth? Anecdotally, one thing I noticed about my Starlight Xpress 825 was that it does seem to be less noisy than my 294MM, even though the read noise is greater in the CCD. I just felt that the images were smoother somehow, but I might be wrong, and it might be due to pixel size. Much to learn, and if you have some sites/books/articles to share, please do! You can always email me at deepskydetail at gmail.com!
@@deepskydetail Thanks a lot! My only purpose was to stimulate curiosity about this, and I hope it is useful... First I would like to point out that with digital cameras we are no longer "taking pictures" but "making measurements". Our intent is to measure, as well as possible, the intensity of the energy (emitted in the form of photons) of a subject. Our camera is an array of "micro-sensors" that we call pixels, even if a pixel means something different in the digital world. Either way, we can only measure the total energy that falls onto a single pixel, for every pixel, in a given time. We choose to represent these energies as light reproduced by a monitor, which is natural since we are capturing emitted "light". The point is that, even in a perfect world, we can only recover that total energy and nothing more, no matter the system; the best possible result is to get that exact energy.
When we take such "measurements" we introduce errors that alter the "real" data we want to capture. What we call "noise" in digital imaging is exactly that: how much of the data is certain, coming from the source we intend to measure, and how much is not? That is the SNR, which is more correctly described as certain vs. uncertain.
Read noise is part of the electronic noise and is tied to the system; we can't measure it on a given frame or take it out of the equation, since it is not fixed. When we say 5 e- of read noise, the actual error on any single read can be anywhere from about +5 to -5, or even 0! It is different each time, and we have no way to measure it after the fact. Other noise sources we can deal with in some form, e.g., dark current: we can take dark frames and subtract them from the image to "correct" the measurements, but we can't remove the read noise uncertainty from the dark frames either, so another small error contributes to the final measurement (that's why professionals use cryogenic cooling on their cameras and don't take flats at all). Anyway, the main difference between CCDs and CMOS is their purpose. The former (CCDs were originally invented as computer memory) were made for taking precise measurements, and they involve complex electronics and clocking systems, which is why they need a lot of power. CMOS, on the other hand, was born for industrial uses that didn't need a high degree of precision. At first they were used as counters or triggers for industrial machines: imagine a conveyor belt with lots of cans of Campbell's soup; a CMOS sensor underneath the belt could see the light darken as each can passed and count them. These were the applications CMOS was intended for. Today we are too few users for the manufacturers to bother making a CMOS sensor as precise as a CCD for measurement. We might be 1,000,000 people, but they make 8 billion CMOS sensors per year just for the mobile phone industry!
Anyway, we are getting smarter: we don't really take pictures anymore, we do statistics instead! That's why we keep getting better and better results, even with sensors that were not made for our purpose.
It is important to understand that read noise is an error we can't remove, but we can bring it down to a level we are comfortable accepting, and the same goes for all the total noise that we can measure or deal with in some way. As a fixed number within a range, it is effectively a percentage: with a higher full well it becomes a smaller and smaller percentage, while on very low full wells the uncertainty becomes higher and higher as a fraction of the whole signal.
We are adapting to smaller pixel sizes, and this is even convenient for us, because we need a shorter focal length to achieve the same spatial resolution per pixel compared to CCDs, but at the cost of data precision and full well. We have no way of knowing what any CMOS converter does behind the scenes, since that is an industrial secret. For example, nowadays we take flats just to correct the "vignetting" of the system, but with CCDs we were also correcting sensitivity variations and discrepancies from one pixel to another! The CMOS keeps a map inside its electronics that corrects these values, but how precise is that correction? We don't know, and we cannot change it. None of this trickery was present in CCDs. In the end, that's mainly why your CCD frames seem more "smooth" to you: because they are! The data has a higher degree of precision.
Sorry, I wrote too much...
Anyway, these are two great sources to read if you like:
- CMOS/CCD Sensors and Camera Systems, Second Edition, by Gerald C. Holst and Terrence S. Lomheim
- Photon Transfer, by James R. Janesick
@@deepskydetail Sorry, but I would like to make another, maybe useful, consideration...
There is another difference between CCDs and today's CMOS: quantization noise. The CCD has a true 16-bit converter, while all current CMOS sensors do a double 14-bit read (and I don't know whether the double reading also doubles the read noise...) with two different converter gains, interpolating the results to "obtain" a 16-bit approximation. Try shooting a subject with a large dynamic range, using the same scope, the same sampling angle, and the same fairly long single exposure, say 300s (you will probably saturate the CMOS, not because it is more sensitive but because it has a smaller full well), with both a CCD and a CMOS, and then look at the histograms... The CMOS histogram will be much "smaller" and more "spiky" than the CCD one. Then try stretching both images non-linearly in fixed steps, and you will see that the CCD retains more and more "tones" and more ability to distinguish small differences in light, while the CMOS will "appear" to live in a very small range with fewer tones to show. This is another effect of the precision of the data readout...
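The bit-depth part of this claim can be illustrated numerically: digitize the same simulated scene with a 16-bit and a 14-bit linear converter and count how many distinct output codes survive. This is only a sketch of quantization with made-up scene statistics; it does not model any camera's actual dual-gain readout:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated high-dynamic-range scene, in electrons (hypothetical values)
scene_e = rng.gamma(shape=2.0, scale=3000.0, size=100_000)

def distinct_tones(signal_e, full_well_e, bits):
    """Map electrons to ADU with a linear converter; count distinct codes."""
    levels = 2 ** bits
    adu = np.clip(np.round(signal_e / full_well_e * (levels - 1)), 0, levels - 1)
    return len(np.unique(adu))

print("16-bit distinct tones:", distinct_tones(scene_e, 50_000, 16))
print("14-bit distinct tones:", distinct_tones(scene_e, 50_000, 14))
# The 14-bit version shows markedly fewer distinct tones over the same range.
```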
Thanks for the info and sources! I now really want to test those ideas, like comparing the histograms. Sounds like some ideas for new videos ;)
I also checked your Astrobin images, and was blown away! Great images!!
Those are some very interesting points!
Actually, I’m also a bit confused by the IMX571 cameras.
I’m comparing it to my old IMX183 camera. Which has tiny pixels.
BUT! If I compare the two on my Celestron RASA 8 at f/2.0 and my SW 130PDS at f/5.0, I'm not sure which is better. On nebula targets the 183 doesn't seem very good. On the other hand, the 571 is perfect for broadband targets like dark nebulae.
But I do not have the deep knowledge to compare these cameras and telescopes like you.
So, keep going!!
Thanks for the comment! There are so many things to consider with these cameras that I feel I could do a whole series on them, because there is so much I just don't know. If you do want to compare, feel free to send me an email!
Hi Mark, just a question about bigger pixels ... You also have an ASI294mm Pro with even smaller pixels but a good QE!?
Please comment about the 294MM which i also have and love!
I'm working on a video about the 294 right now as a follow up. Stay tuned!
I hope you get better soon, and I hope your skies aren't too clear until you're fit again lol. I'm guessing that with OSC cameras the sensitivity is much reduced. My other half has an Atik 314+ with a pixel size of 6.45 µm; in its day it was a good camera, but she now uses an Altair 269C with a pixel size of 3.3 µm and it's just better. The same goes for me: I have an old QHY8 OSC with a pixel size of 7.8 µm, but it's nowhere near as sensitive as the Altair 26C. I tried a 12nm dual-band filter in front of it and it couldn't see anything to get focus, so it's consigned to collecting dust right now.
Thanks! Great comment! I agree, you have to look at a lot of things. A lot of the older CCD cameras are quite a bit worse than the 2600/6200MM when it comes to quantum efficiency. I don't doubt that the old camera couldn't see anything through the filter! Some of the Starlight cameras have quite a bit better QE, though (like my little SX-825 beating the 2600MM in QE for Ha).
A bit bigger pixel size would help with some of the CMOS cameras imo. But, of course, with the CMOS cameras, you can still bin.
And of course, the larger field of view from a bigger sensor is also really important, so you don't have to do mosaics as often anymore!
If you look at scientific-grade cameras, most of them have substantially larger pixels than the modern consumer-grade CMOS chips. I believe it's for this reason: they're less worried about resolution and more about SNR.
I think that makes sense. Also, a lot of professional telescopes have quite long focal lengths, so bigger pixels also help to match the resolution. I think you're 100% right that pixel size is important for getting good SNR at long focal ratios and large apertures! It reminds me of an Astro Imaging Channel live stream where they talked about why you'd want bigger-aperture scopes.
Yeah, thinking of a PlaneWave or something like it.
To be fair, ZWO does offer the 2400MC Pro full-frame camera with 5.94 µm pixels. Of course that is OSC and not mono... but still, bigger pixels! =D
Yes! I was thinking about the 2400, and almost wanted to say "make this one in mono!"
The real pixel size of the mono pixels without the Bayer matrix is half of the 5.94 µm. Sony would need to develop another chip with bigger pixels. It would be a significant evolution of the current full-frame mono cameras, and it depends on the chip producer, but astro is not a very interesting market for them.
I did not know that without the Bayer, the size is half, but it makes sense! I wish Sony would get more interested in astro!
How can one talk about SNR without mentioning 1) exact light pollution numbers, 2) Moon phase and distance, 3) exposure length, to determine contribution of read noise?
Great comment! Here's some information to help clarify things. Most of the unknowns you mentioned are in the viewers' favor though, so I don't think those factors explain the difference in SNR measured.
1) Exact light pollution is an unknown for the comparison, but the Ha filters should get rid of most of this effect. The viewer's Ha filter is better than mine (edit: maybe?? Cuiv brought up some good points). If anything this should increase SNR for the viewer's image. Additionally, the SNR calculator app tries to measure light pollution and subtract that out of the calculation.
2) Moon phases: mine was taken on October 10, 2019, 90% illumination. The viewer's was taken around July 4, 2024, almost a new moon (2.88% illumination). Another advantage for the viewer's image.
3) At 8:31 I noted that both subs were 300 seconds. The SNR calculator takes read noise into account.
The fact that the CCD's SNR measurement is about 1.7 times more than the CMOS's, and the CCD's pixels are about 1.5 times bigger (close to the theoretically correct number when accounting for scope differences) is either a really weird coincidence, or pixel size is a really good explanation for the increase we observe.
@@deepskydetail It seems like the DSO shot noise is the dominant part of the overall noise in your measurement/calculation, because 1.7 times larger pixels will indeed produce 1.7 times higher SNR, everything else being equal (1.7 times larger pixel -> 1.7^2 times more signal -> sqrt(1.7^2) = 1.7 times larger SNR.) This tells me that, for your measurement of noise, you are choosing a relatively bright part of the nebula where DSO shot noise dominates. To me personally, those parts of the nebula have never been very interesting because they are bright and will look just fine with reasonably long integration time. It's the dimmest parts, where light pollution shot noise and read noise dominate, that's where SNR measurements/calculations are most interesting. Case in point: Goofi's Imaging Challenge at Cloudy Nights is the Squid nebula for this month. My first session was 4 hours at f/7, 3.76 micron pixels, 3 nm Oiii filter, Bortle 5.5, 240s subs. SNR of the brightest part of the Squid was 2 after 4 hours. Now that will require some serious integration time :).
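The derivation in the comment above (1.7x larger pixel side → 1.7² times the signal → 1.7x the SNR) is easy to verify numerically. A minimal sketch, assuming a uniform flux, shot noise only, and a made-up flux level:

```python
import numpy as np

rng = np.random.default_rng(1)
flux = 100.0          # photons per unit area per sub (assumed value)
n_trials = 200_000    # independent simulated exposures of one pixel

for scale in (1.0, 1.7):              # relative pixel side length
    area = scale ** 2                 # light collected scales with area
    counts = rng.poisson(flux * area, size=n_trials)
    snr = counts.mean() / counts.std()
    print(f"pixel side x{scale}: SNR = {snr:.2f}")
# SNR grows by ~1.7x when the pixel side grows 1.7x, matching the comment.
```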
You're right. I did choose a brighter part of the nebula because it was easier to make sure I got the exact same area. The fainter parts are definitely where it's at! It looks like I'll need to do a follow up video eventually!! I wonder whether the increase in signal can actually offset the read noise in those areas.
These are a couple of several very important variables to control in a valid controlled experiment. I'll also include sky clarity (which can vary significantly over time) and target altitude. I know this may not be practical, but really there are only two ways to do this: either side-by-side imaging rigs, which I've seen done, or simulating the results (or differences) with a robust model. I know, it's complicated... which is why it's rarely done robustly.
Easy, because he’s talking about the sensitivity of the sensor, the light source is relevant.
Get well soon!
Thank you!!
Interesting. Well I just bought an IMX571 cooled camera. I still think it will be better than the DSLR I was using (I hope)
I would think it is better!
Get well soon mate! So the results are not super surprising to me since as you mention towards the end, you can bin, trading resolution for SNR. A 2x2 binned pixel on the IMX571 sensor still likely has much less read noise than that of a pixel on the SX.
But I honestly don't think that's it, nor is it the QE (although that certainly contributes) - there is a decent likelihood that it is simply the filter not performing up to spec, especially if it's from *brand name redacted whose very narrowband filters tend to be hit or miss without the end user being able to tell*.
It's also possible that combining low QE + 3nm bandpass + potentially defective filter + not so fast optics used + Bortle zone likely better than mine, this sub exposure simply doesn't swamp the read noise (although I would need to actually compute stuff to know for sure, the probability is low).
Super cool potential issue to bring to light though!
I've thought about it ever since your videos about filters popped up. However, I think he is onto something. I always wondered why my oxygen signal always seems brighter than Ha, and this sensitivity-vs-wavelength graph completely explains it. I thought it was my filter's fault (an L-Ultimate), but actually it may be the camera, the 2600MC... although I would still like to have my L-Ultimate tested😊
Thanks, Cuiv! I agree, those are all possibilities, and I cannot rule them out (I don't actually know the brand that the viewer used, so it could be a defective filter like you mentioned). But I keep coming back to the fact that the CCD's SNR was about 1.5-2.0x better than the CMOS's, which is right in the range we'd expect based on the difference in pixel size. Plus, I've received similar feedback from more than one viewer about the camera.
It's a very parsimonious answer, but of course it might not be correct!!
@luboinchina3013 I'd be interested in a test of your filter too!
@@deepskydetail Good day! I wasn't sure in which thread to reply, but this seems the most appropriate one. First of all, I really wish you the best with your shoulder!
I have a lot of thoughts on all this. Most of the issues were already brought up, but the filter is also in my opinion the most relevant one. Let's just say things haven't been great for us.
What I can say is that we noticed a massive improvement when we replaced our STX-16803 of the observatory with the C4-16000. Calibrating the latter is very difficult, but the SNR increase was vast, to say the least, simply because it is more or less impossible to expose the CCD in the 16803 for long enough to get significantly over the read noise.
With that said, here is my understanding of the entire concept. Signal is the amount of electrons in the sensor generated by the actual light from space we want to capture. Or at least that's the signal we want. Electrons are also generated by dark current and light pollution. That's the signal we don't want, but we can subtract it (dark calibration and background extraction). However, all 3 of these components generate Poisson noise. And this can not be subtracted. An additional, fixed component is the read noise generated by the readout. This does not scale with exposure time. So we can choose the exposure time such that all other noise components massively exceed the read noise (some people use 95%, some 99%). This is a very important factor and in my honest opinion only images for which this is the case should be compared. I can't do the maths here because you didn't mention the exact camera model :)
Usually the CCDs have much higher read noise, therefore the newer CMOS cameras are much more practical in terms of exposure lengths, which is another advantage. But let's ignore that for a second and assume that we choose the exposure length necessary for the camera with the higher read noise (probably the CCD) for both cameras. Then we can assume that we mostly eliminated the factor of the read noise for the measurement. (BTW: For Bortle 5 and a 3nm filter in my experience 300s is too short, even for the IMX455/571).
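For reference, the "swamp the read noise" rule mentioned above is usually written as: pick the sub length so that the sky background's shot-noise variance exceeds the read-noise variance by some factor. A minimal sketch of that rule of thumb, with a placeholder sky rate that is not a measurement from either setup:

```python
def min_sub_exposure(read_noise_e, sky_rate_e_per_s, swamp_factor=10.0):
    """Shortest sub such that sky shot-noise variance >= swamp_factor * RN^2.

    Sky shot-noise variance after t seconds is sky_rate * t (Poisson),
    so we need: sky_rate * t >= swamp_factor * read_noise**2.
    """
    return swamp_factor * read_noise_e ** 2 / sky_rate_e_per_s

# Hypothetical numbers: a 3nm filter under Bortle 5 might pass ~0.05 e-/s/px
print(min_sub_exposure(read_noise_e=1.5, sky_rate_e_per_s=0.05))  # 450 s
print(min_sub_exposure(read_noise_e=3.5, sky_rate_e_per_s=0.05))  # 2450 s
```

With a narrow filter and a dark-ish sky, even a low-read-noise sensor can need surprisingly long subs, which is consistent with the commenter's experience that 300s is too short.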
Let's further assume that ALL other conditions are the same: same bandpass, same sky quality, same optical parameters. Then we still have the factor of the dark current to worry about. For the IMX455/571, if you cool down to 0 or -10°C, the dark current is sufficiently low to be ignored, even with narrowband exposures at f/6 under mediocre skies (Bortle 5). I also measured this myself. Can't say anything about your mysterious CCD though ;) (This is also the reason why the image at Gain 0 has more noise than the image with Gain 100: the exposure time was the same, but the read noise was much higher.)
So finally, also ignoring read noise, the only thing left under the outlined same conditions, is the quantum efficiency. But as you pointed out the difference is marginal. I don't think it plays a role here at all, it especially can't explain the difference you see.
But here we come to the hard part. I understand that the factors which I assumed to be equal are in fact not, and that you assume most of them in your test case are working in favor of the CMOS here. And I tend to agree. But that still leaves sky quality and especially the filters. I really, really strongly agree with @CuivTheLazyGeek on this one. Narrowband filters with a 3nm bandpass from certain more affordable manufacturers tend to be very off. That's why we verify all the filters we buy for the observatory ourselves using our high-resolution spectrograph.
I guess I don't need to explain why the weather is important. Transparency and moisture, especially combined with light pollution, all this good stuff can easily change the signal by the amount you see here.
In the end you explain the difference in SNR you see with different pixel sizes. I don't know exactly how your SNR calculation works, so maybe you can give some pointers on whether the number of pixels influences the calculation somehow. Note that I'm referring to the number of pixels here, not the pixel size, for the following reason: assuming the exposure length was chosen long enough that the read noise becomes insignificant compared to the Poisson noise, and the sensor temperature is low enough that the Poisson noise of the dark current can be neglected compared to the Poisson noise of the light pollution and the actual signal, the entire noise in the image depends only on those two factors and the quantum efficiency of the sensor. In other words: the flux density is the same, and if we integrate over the same sky area we get the same flux, which is multiplied by the quantum efficiency and then converted to ADU using the gain. In this scenario only the quantum efficiency influences the actual signal, and therefore, via the Poisson process, also the SNR.
So in my opinion the comparison you are showing is unfortunately very flawed. What you showed here is that observing conditions, other equipment (scope, filters) and the technical knowledge of the observer are maybe even more important than some detail of the camera parameters.
Additionally, all the measurements I took with CCD cameras and CMOS cameras clearly indicate that, in my personal experience, CMOS cameras are really much better than the CCDs. I'm very much looking forward to being proven wrong though!
In the coming night I'll take an exposure of 208 seconds with my ASI6200, which should be equivalent to a 300s exposure with an f/6 scope. (In my experience, with my 6nm Ha filter and my conditions of ~Bortle 5, this exposure is waaaaay too short to overcome the read noise anyway, so I still think this comparison is not correct.)
Meanwhile, I'd like to give you a 600s exposure I already have of that region. Framing is a bit odd because it's part of a larger mosaic, but anyway.
Other parameters:
- scope: FSQ106ED @f/5 (native)
- Filter: Astronomik 6nm
- Gain 100
- Offset 50
If you need other parameters or calibration frames please let me know.
I heard that YouTube removes URLs, but here goes nothing: www.sternwarte.uni-erlangen.de/~weber/NGC7000.fits
Hi @halloduda8142 ! Thanks for the response. I'll try to respond to your points, but forgive me if I miss something. There's a lot in your comment! You can also email me at deepskydetail at gmail if you'd like.
1) Massive improvement when we replaced our STX-16803 of the observatory with the C4-16000 because of low read noise:
- It's my experience that read noise is pretty low on the list when it comes to the SNR of long exposures. Could the difference be related to the QE of the cameras? Looking at the STX-16803, the QE is around 60% at 500nm. I couldn't find the specs for the C4-16000. Since both have the same pixel size, they should be even in that regard. A lot of CMOS cameras DO have better QE than CCD cameras, but I don't think that's always the case (like my SX-825 being comparable to the ZWO 2600MM).
2) Signal is the amount of electrons in the sensor generated by the actual light from space we want to capture [....]So we can choose the exposure time such that all other noise components massively exceed the read noise.
- I agree with all of this!
Also, my camera is an SX-825, by the way. Its read noise is 3.5 e-, which is slightly worse than the ZWO at 0 gain (3.25 e-) and nearly three times the 1.25 e- at 100 gain. I don't actually think the read noise affected the SNR calculations in this video very much. Even if it were 10 e- (roughly 3x worse), the SNR would still be considerably better than the ZWO's in my example.
3) Narrowband filters with 3nm bandpass of certain more affordable manufacturers tend to be very off. That's why we verify all these filters we buy for the observatory ourselves using our high resolution spectrograph.
-Yeah, I agree it very well could be the filters! I can't rule that out given all the other factors that I couldn't control in the test!
4) So maybe you can give some pointers whether the number of pixels influences the calculation of the SNR somehow [....] the Poisson noise of the light pollution and actual signal, the entire noise in the image only depends on exactly these two factors and the quantum efficiency of the sensor. In other words.
- Pixel size is the important factor here, although we probably agree and are just thinking about things differently. I agree that most of the noise is determined by the DSO signal strength and the LP signal in long, cooled exposures. The SNR calculation comes down to the brightness of the pixel divided by its square root. Assuming the same scope is used, a bigger pixel is going to collect more light than a smaller one. If a pixel is 2x as big on a side, it collects 4x the light; its shot noise increases by 2 (i.e., the sqrt of 4), so the overall SNR increases by 2. Instead of 4 little noisy pixels in that area, you have one big one with better SNR.
The difference in pixel size between the ZWO and the SX-825 is 2.69 µm; in other words, the SX pixels are 1.71x bigger on a side. Everything else being equal, each SX pixel collects 2.94x the light but generates only 1.71x the shot noise, so the SNR ratio is 2.94/1.71 = 1.71 (see the short sketch after point 6).
The actual difference in SNR between the SX and ZWO for one sub was about 1.5 in the SX's favor. Given that the ZWO was using a faster scope, that can pretty much explain the difference in SNR without assuming anything about the filters.
5) Additionally, all the measurements I took with CCD cameras and CMOS cameras clearly indicate that, in my personal experience, CMOS cameras are really much better than the CCDs.
- I actually agree with you on this! Most CMOS cameras nowadays are a lot better than older CCDs (but some CCDs were really good!). But I think the main factor is that, on average, QE has increased with CMOS. The read noise is a smaller factor; it helps, but for some of the newer CCDs it's actually OK. The thing about the ZWO 2600/6200MM that holds them back is their QE at Ha and their pixel size. Binning can take care of the pixel size, but the QE at Ha is pretty disappointing, I think. Maybe my title is a bit clickbaity, which led to confusion!
6) In the coming night I'll take an exposure of 208 seconds with my ASI6200, which should be equivalent to a 300s exposure with an f/6 scope. (In my experience, with my 6nm Ha filter and my conditions of ~Bortle 5, this exposure is waaaaay too short to overcome the read noise anyway, so I still think this comparison is not correct.)
- Very cool! I'm looking to be proven wrong too :) Also, different parts of an image have different brightness, so we can experiment with brighter and dimmer areas. The dimmer areas are where read noise may matter a bit more. I'm looking forward to the results!!
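To make the arithmetic in point 4 reproducible, here is the same calculation as a few lines of Python (pixel sizes as quoted in this thread, 6.45 µm for the SX-825 vs 3.76 µm for the IMX571; everything else assumed equal):

```python
sx_pixel_um, zwo_pixel_um = 6.45, 3.76    # SX-825 vs IMX571 pixel sizes

side_ratio = sx_pixel_um / zwo_pixel_um   # ~1.71x bigger on a side
light_ratio = side_ratio ** 2             # ~2.94x the photons per pixel
noise_ratio = light_ratio ** 0.5          # shot noise grows as sqrt(signal)
snr_ratio = light_ratio / noise_ratio     # collapses back to side_ratio

print(f"side {side_ratio:.2f}x, light {light_ratio:.2f}x, SNR {snr_ratio:.2f}x")
# side 1.72x, light 2.94x, SNR 1.72x -- close to the ~1.5x measured, with
# the viewer's faster scope plausibly accounting for the remaining gap.
```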
Thanks for an interesting video, as always :)
Wish you a speedy recovery ❤️🩹
Thank you!
I had the ASI294MM, and although I really liked it for its versatility (2x2 bin, etc.), the flats were a bit of a nightmare. Variable ADU at short exposures really hurt this camera. I ended up selling it and sticking with my trusted 2600MC. I really want to get back into mono, but I'm not sure whether I want APS-C or the bigger full frame. With the new European USB-C rules, I wonder if ZWO will be forced to re-engineer the cameras.
I do enjoy my 294MM too. But I agree that flats with the 294MM are difficult. It's hard to get longer flat exposure times, especially in L. A lot of times I end up having to redo them!
@Ben_Stewart @deepskydetail
Don't stress too much over the flats: find a flat panel with an extremely dim setting, set NINA to the recommended ADU value with a maximum deviation of 2%, and let the app shoot your flats with dynamic exposure. Mine are around 10s of exposure. Guess what: they correct the lights so well that SPCC in PixInsight sometimes doesn't even do much, since the channels are already properly corrected by the flats. This camera indeed goes nuts with short flats or bias frames, so forget that; give her what she needs: long flats, and dark flats instead of bias frames. I've been using it for over a year and I love the results. The small pixels start becoming a little problematic at my 1200mm focal length, but so far deconvolution takes care of the oversampling pretty well, boosting the sharpness of the image. I would not recommend this camera for an even longer focal length, though.
Since I got my L-Ultimate filter I have been wondering why I seem to see much more O3 than Ha, and I blamed it on the filter (12:11). Now I know it is the camera, the 2600MC... I have to push it to gain 300 at 300s exposures because of my f/10 8HD... I am wondering if I am wrong to push it that much.
There is also the possibility that the OIII emission line, sitting well within the visible spectrum, is affected by light pollution quite a bit. I think that could be a big reason OIII looks much brighter, but I could be wrong! What's nice about the SNR calculator app is that you can run a couple of tests with different gain settings to see how gain affects SNR :)
This is a strange video, sir. The concept you seem to be discovering here is called "sampling" (see also over- and under-sampling), and it represents a trade-off between potential resolution of detail and signal vs. noise. And you can shift this ratio with binning or, in post-processing, with a tool like Integer Resample in PixInsight. No real downside here... but rather more flexibility.
The 294's case is kinda shot by its dark current and the complexity of getting a truly clean (not just clean-looking) calibration with long narrowband exposures. I've not been able to get better signal vs. noise from my 294MM than from my 2600MM, with bin2 on the 294MM (i.e., grouping the quad-Bayer sub-arrays) vs. bin1 on the 2600MM, despite this configuration also giving the 294MM the advantage of larger "pixels."
Thanks for the comment. Could you please point to where in the video I claim to be discovering something new 😉
Around 11:05-11:40 I talk specifically about sampling and the tradeoffs of resolution/sensitivity. The point of the video is to figure out why some of my viewers were having a hard time getting good SNR with the camera. It didn't start out as a video about sampling. Sampling just came about naturally as I was looking at the data from the two cameras' images.
It's interesting about your comparison of the 294 and the 2600. If you have any data you'd like to share, I'd be more than happy to take a look!
I don't think it's the pixel size; pixel size doesn't impact the image's overall SNR if you're only counting shot noise, since the light gathered is the same.
That's why, if you pick two mirrorless cameras with different megapixel counts and the same sensor size, they'll perform very similarly, except for a small advantage to the low-megapixel camera due to fewer sources of read noise.
I think there are too many variables between the two systems to draw conclusions, the test should be done with the same setup.
Edit: also, binning has zero benefit for a CMOS image's SNR. It works on a CCD because the charge of a 2x2 group of pixels is added and physically measured just once, which reduces read noise, while on a CMOS you're still reading every single pixel and just averaging the values mathematically afterwards. So not only is there no benefit to the SNR, but you're also throwing away resolution.
Thanks for the comment! Some responses:
1) Pixel size does matter as well. Let's consider just the light gathering ability, as you mention.
-Let's say a given area on the imaging circle gets 4 photons of light on average each sub-exposure. If there is only one pixel (and of course only considering shot noise and a perfect imaging setup), the SNR is going to be 4/sqrt(4) or 2. The average SNR of the pixel is 2 (2 divided by 1 pixel).
If there are 4 pixels, then each pixel will have an SNR of 1 (signal = 1, noise = sqrt(1), SNR = 1). The average SNR of the four pixels is 1 ((1+1+1+1)/4 = 1).
The 4 pixels are going to look noisier than the 1 big one: instead of 1 big pixel with 1 value, you have 4 small pixels with slightly different values (there's a short numerical sketch after point 3 below).
2) I think there are too many variables between the two systems to draw conclusions, the test should be done with the same setup.
-I agree! I hope in the future to do follow-up tests. I tried to make it clear that the conclusions and the test are flawed and that I need more data.
3) binning has zero benefit to a CMOS image SNR
This is, as far as I know, not true. You still get the benefit of adding the signal together. However, as you mention, you add in the read noise of all the pixels instead of just one. But read noise in newer cameras is pretty small, so you still benefit much as you do with CCD binning. Altair has a good explanation of CMOS binning here: www.altairastro.help/info-instructions/cmos/how-does-binning-work-with-cmos-cameras/
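To make points 1 and 3 concrete, here is a minimal numerical sketch (illustrative numbers only, not measurements from either camera). It reproduces the 4-photon example from point 1, then compares hardware (CCD-style) binning, where the summed charge is read once, with software (CMOS-style) binning, where each pixel contributes its own read noise:

```python
import math

# Point 1: the same light (4 photons) on one big pixel vs four small ones.
print("per-pixel SNR, 1 big pixel :", 4 / math.sqrt(4))   # 2.0
print("per-pixel SNR, 4 small px  :", 1 / math.sqrt(1))   # 1.0

# Point 3: binning 4 pixels of 25 e- signal each, with read noise included.
def snr_binned(signal_per_px, read_noise, reads):
    """SNR of a 2x2 sum; reads=1 models CCD hardware binning (one readout),
    reads=4 models CMOS software binning (each pixel read separately)."""
    total = 4 * signal_per_px
    return total / math.sqrt(total + reads * read_noise ** 2)

for rn in (1.5, 7.0):   # CMOS-like vs older-CCD-like read noise, in e-
    print(f"RN {rn} e-: hw bin {snr_binned(25, rn, 1):.2f}, "
          f"sw bin {snr_binned(25, rn, 4):.2f}")
# With ~1.5 e- read noise the two differ by only a few percent, which is
# why software binning on a low-read-noise CMOS still buys real SNR.
```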
@@deepskydetail afaik, since they're 4 independent and unrelated noise sources, when you're averaging shot noise from adjacent pixels you should sum those noise sources in quadrature.
That means a single pixel gets 4 signal / √4 noise = 2. Four pixels with 1 noise each will get (1×4) / √(1²×4) = 2. It's the same, which makes sense, since shot noise depends only on the amount of light and nothing else. It still works for bigger numbers (just pointing that out since 1 squared might look weird): with 100 signal on 1 pixel you get √100 = 10 noise, so SNR = 10; with 25 signal on each of 4 pixels (each with 5 shot noise) you get (25×4) / √(5²×4) = 100/10 = 10.
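The arithmetic above is easy to check in a couple of lines (same illustrative numbers as the comment):

```python
import math

# One big pixel: 100 e- of signal, shot noise sqrt(100) = 10
print(100 / math.sqrt(100))               # SNR = 10.0

# Four small pixels: 25 e- each (5 e- shot noise apiece); signals add
# linearly, independent noises add in quadrature: sqrt(4 * 5**2) = 10
print((4 * 25) / math.sqrt(4 * 5 ** 2))   # SNR = 10.0
```

Per fixed unit of sky area, the shot-noise-only SNR comes out the same either way; the two sides of this thread mainly differ over whether per-pixel SNR or per-area SNR is the right quantity to compare.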
Again, you can check for example the DPReview comparison: there is very little difference between an A7RIV with 60 MP and an A7III which only has 24 MP; if shot noise changed the result, the difference would be vastly bigger (I'll link it in another comment in case stupid YouTube blocks the link).
The problem with CMOS binning is that you're not gaining a real benefit in SNR; mathematically it's exactly the same as simply reducing the image size, which can be done in post after you've stacked the full-resolution files. So there is no reason to bin a CMOS unless you need faster transfer speed during acquisition (which is definitely not the case for deep-sky astro).
@@deepskydetail www.dpreview.com/reviews/image-comparison?attr18=lowlight&attr13_0=sony_a7iv&attr13_1=sony_a7iii&attr13_2=sony_a7riii&attr13_3=sony_a7riv&attr15_0=raw&attr15_1=raw&attr15_2=raw&attr15_3=raw&attr16_0=6400&attr16_1=6400&attr16_2=6400&attr16_3=3200&attr126_2=1&attr199_3=1&normalization=compare&widget=1&x=0.086367625548513&y=-0.14081228556828976 Here is the comparison. Of course the A7III wins because its read noise is lower, but that's not even a single stop of difference, even with pixels that are almost 3 times smaller in area.
Hate to be that guy, but what's the track that starts at about 7:13 or so? I've heard it a few times and I really like it!
Be that guy! No shame :) This one? ruclips.net/video/T9IXodtjRgs/видео.html
Heaven and Hell by Jeremy Blake (it's in the YT music library)
Very interesting! What Bortle-scale skies do you have, and what about the fellow who provided his data?
That's a good question! Mine was taken from a Bortle 7/8 zone. I'm not sure about the viewer's image tbh. That's another reason to take these results with a grain of salt! Although the Ha filters should help even things up, I would think.
@@deepskydetail I am fortunate to live under Bortle 4 skies and am amazed how much integration time is needed in Bortle 6+ conditions. I use a Skywatcher 150P for widefield, and a 150mm Mak and C11 for narrow fields.
@@daveheider6081 I'm always amazed at the effects of light pollution too, and how it just destroys the fainter parts of the image. Bortle 4 is nice! I'm right now in a Bortle 6-7, and it feels like heaven compared to a Bortle 9 I lived in a few years back!
Too bad there isn't a mono camera with the IMX410.
The 294mm stacks up pretty well. Mine has been great with my 8" sct.
Yeah, it'd be great if there was a mono version of the 2400. Also agree about the 294mm. It's underrated, I think!
I appreciate you doing these comparisons. However, I think you are conflating pixel SNR with target SNR, or more precisely SNR per fixed unit of sky area. Yes, a smaller-pixel camera will have a lower SNR per pixel, but if it has four times as many pixels, you can create an equivalent of the camera with 1/4 the number of pixels, because four pixels can be averaged together, and that doubles the SNR at the pixel scale of the larger-pixel camera. This is an extremely important point: you need to normalize to a fixed pixel scale.
Finally, comparing examples taken on different nights or with different equipment is really difficult, because sky clarity can change dramatically. I've had examples where the sky clarity seemed somewhat similar, but the signal-to-noise ratio from two different nights was almost 2x different. Target altitude and moon phase are also very important factors, as is the background light pollution. And when comparing different filters, it's not just the width of the filter: different filters have different peak bandpasses at Ha, although I admit this factor is typically a 10 to 15% difference, but still, that's a difference. The two biggest issues are that you need to normalize to a fixed area of sky (or fixed pixel scale), and that if you're going to compare two different captures, they need to be done with the same equipment on the same evening.
Great comment! About the different setups/nights, I completely agree with you; as I stated a few times in the video, the comparison is quite flawed and I really want more data to test things with. That being said, most of the variables that I do have data for (equipment, moon phase, etc.) are in the viewer's favor.
About the image scale, I also agree with you, which is why I mentioned binning and the tradeoffs with resolution at the end of the video ;) Thanks!
@@deepskydetail Thanks for the response. Perhaps you can go back, normalize for pixel scale, see how that impacts your results, and update your video to keep it as current with your latest thinking as possible. Bummer about your injury, I hope you heal quickly. I think this is critical because I see it being missed by many people and even websites. It would be a huge service to the AP community for them to know that, in order to compare systems, you need to compare at equivalent pixel scale.
That's a good idea!! If I were to guess, I'd think that the results will show that the SNR per unit area is very similar for both cameras (one has slightly higher qe, the other has lower read noise). I also think that with digital images, the overall SNR of the image will change how good it looks, and pixel size is one (out of many) important factors to consider in the overall SNR. Even if the SNR per unit area is the same, an image with bigger pixels might get better overall image SNR faster than one with smaller pixels, which of course is why binning might be considered.
Sorry for the rambling, but I guess what I'm saying is an image using bigger pixels, all things equal, will get a smoother looking image faster than one with smaller pixels. It's the human perception at the end of the day that will make the judgment. The tradeoff, of course, is resolution (i.e. start zooming in on the bigger pixel image, and things might start getting blocky).
@@deepskydetail Actually, this has been debated on Cloudy Nights extensively, and the general consensus is that you can always downscale the higher-resolution image to the equivalent pixel scale and improve the SNR, even after stacking, so there is really no advantage to the larger-pixel camera unless you are suffering from read noise. The only reason a larger-pixel camera is better is to swamp read noise more quickly, but as you know, CMOS cameras have very low read noise, so this is unlikely to matter here even with smaller pixels. I think if you work the math you'll find this to be true. So really the main difference comes down to Ha sensitivity.
@stevenmiller5452 I see what you're saying, and I agree! That does make sense the way you've explained it. The thing is, generally, the people who have contacted me using these CMOS cameras aren't binning, downsampling etc., and they are wondering why the SNR is so bad (and consequently why it takes them so long to get an image they want). They could downsample. They could bin. And that's ok! It's in the video as a solution! But their expectation (based on what I think is marketing) is "the cameras should be faster" by default (i.e., without binning/downsampling), and they're worried something is wrong (when it really isn't).
I have both ZWO 533s. I've noticed the biggest hit I take is for SII. I had a 294MC Pro before the 533s, and I can see the difference.
I am wanting a 294MM Pro so I effectively get two pixel sizes. I can't wait to see a 47 MP Ha sub from one of my small refractors.
Yeah, the QE for these chips (e.g., 533, 2600, etc.) does drop off quite a bit; SII is at around 50% QE. You can still have two pixel sizes with your 533, though, by binning! Bin 2 mode on the 294MM is, from what I understand, just 2x2 binning (with some gain-setting manipulation going on!). Also, seeing and aperture size influence resolution too! Let me know if you get the full 47 MP (I've actually never tried it myself; I generally stick to bin 2 mode)!
You need to understand EMVA 1288. You haven't discussed system gain at all: what offsets and gains are ZWO applying before they give you an image? Raw is never raw.
Thanks for the comment! Let me know if I'm misunderstanding something, but I think the video does address those issues. I do discuss gain for the first half of the video. ZWO allows users to choose offset and gain within astro-imaging software, and the SNR calculator I used subtracts the offsets and gain through calibration frames based on methodology found here: www.cloudynights.com/articles/cat/column/fishing-for-photons/signal-to-noise-part-3-measuring-your-camera-r1929
If there is something else I'm missing, let me know! I'm always eager to learn, especially if I'm doing something wrong!
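For readers curious how a per-region SNR measurement can work in practice, here is a minimal sketch of one standard approach: take a pair of identical calibrated subs and difference them, which cancels the fixed structure (stars, nebula) so the random per-pixel noise can be isolated. This is a generic illustration, not the SNR calculator's actual code; the file names are placeholders, and a real tool would also subtract the background level from the signal first:

```python
import numpy as np
from astropy.io import fits   # assumes astropy is installed

def region_snr(path_a, path_b, y0, y1, x0, x1):
    """Estimate SNR in a region from two identical, calibrated subs.

    The mean of the region approximates the signal; the std of the
    difference frame, divided by sqrt(2), gives the per-pixel noise
    with the fixed pattern cancelled out.
    """
    a = fits.getdata(path_a).astype(float)[y0:y1, x0:x1]
    b = fits.getdata(path_b).astype(float)[y0:y1, x0:x1]
    signal = (a.mean() + b.mean()) / 2
    noise = (a - b).std() / np.sqrt(2)
    return signal / noise

# Example (placeholder file names):
# print(region_snr("sub1.fits", "sub2.fits", 100, 200, 100, 200))
```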
@@deepskydetail Hi, sorry, I probably sounded a bit abrupt there. The process above largely follows the protocol designated in EMVA 1288; however, it is easy to assume "gain" means the gain setting in your camera UI rather than the manufacturer's gains. We can look at the EMVA report for each of the Sony sensors, which are used in a variety of cameras, but each camera will give a very different result. This is because each manufacturer sets its own analogue gains, offsets, pixel masking, artefact-management processing, etc. I think you already know this...
... but the setup defined at Cloudy Nights misses some critical factors which will wildly affect the outcome of your result:
1) Wavelength of light. EMVA specifies a single wavelength with a FWHM of ~30nm, from memory. This is because the QE response of your sensor varies hugely across the spectrum. You could use a monochromator to analyse each wavelength, but that is practically difficult.
2) Light source. EMVA specifies a disk-shaped source larger than the diagonal of the sensor, at a distance of 8x the diagonal. There is a very specific reason for this, and the results vary considerably if this is changed even slightly.
3) The Cloudy Nights method suggests white paper for the flats. Again, spectrally this changes the light source and renders a camera comparison valid only if both cameras were under the exact same conditions (temperature etc. too), and ultimately this is not how the cameras are used... If you want to compare, it should somehow relate to the wavelengths you will use for astro work. This is the Achilles heel of EMVA: it does not in any way compare cameras in a manner related to the application (astro or otherwise). If done correctly, it is the best we have for pure sensor comparison (at a single wavelength), but it doesn't take into account lens/telescope transmission, emission wavelengths, or anything else in the total imaging "system".
There are many other points, but this is a long enough comment...
What would be more interesting is to create a database of imaging systems (camera with known scopes, Barlows, eyepieces, etc.) measured against a known object (the Moon). All still subjective due to seeing (which is again subjective), but possibly far more useful to amateur astrophotography.
I hope that gives some info worth considering.
I should also have mentioned that the above says gain "is the conversion rate between the raw numbers you get out of the camera (ADU, or Analog-to-Digital Units) and actual electrons". This is misleading. It is the amplification of the signal (minus read noise, but including shot and kTC noise) with all manufacturer corrections included, plus quantisation noise.
My CCD cameras had so much less noise. CMOS sux.
What cameras do/did you have if you don't mind me asking? I think that generally the newer CMOS cameras have less read noise. But the CCDs tended to have bigger pixels.
@deepskydetail I'm sorry, I should have been clearer. They were early digital cameras from Fujifilm; they switched to CMOS to cut costs. Hopefully my new Move Shoot Move will help.