Deep Sky Detail
  • 86 videos
  • 215,687 views
My old CCD wins the SNR battle again. What's going on?
In my last video, I did a flawed comparison of my Starlight Xpress and my 2600MM Pro camera, testing their SNR. The Starlight CCD came out on top of the CMOS I tested. But there were too many confounding variables to be sure. In this video, I conducted an experiment with my 294MM camera and the Starlight to see if I could replicate the results. And... I found some interesting things!
Views: 1,246

Videos

2x the SNR with older CCDs? Why some CMOS cameras aren't as "sensitive" and why it might not matter
7K views · 1 month ago
Want to get the best SNR possible with your telescope setup? Well, you might be forgetting a couple of important factors... pixel size and quantum efficiency! My old CCD camera beat the newer ZWO 2600MM and 6200MM Pro cameras at collecting light. In this video, I show you what probably happened! Become a buymeacoffee member to help choose which products I review: www.buymeacoffee.com/deepskydetail Affil...
My (free) app can tell you! And, it can calculate multiple filters :)
5K views · 1 month ago
Have you ever finished an imaging session, and asked yourself, "Hmmm... I wonder how much more integration time/sub exposures I need to get a good image?" In this video, I'll show you my web application that can help you figure that out! Astrophotography answers are sometimes complicated. But my free web app can help make things simpler. deepskydetail.shinyapps.io/Calculate_SNR/ Become a buymea...
Yes, and I can prove it. (And so does gain).
13K views · 2 months ago
To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/DeepSkyDetail/ You’ll also get 20% off an annual premium subscription. Have you ever seen a comparison of two stacked images with different sub-exposures and thought, "those don't look that different"? Well, they might be really different. In this video, I'll prove that sub-exposure length matter...
But you can decrease it. Here's why. Introduction to Deconvolution Part II
3.6K views · 3 months ago
To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/DeepSkyDetail/ You’ll also get 20% off an annual premium subscription. Ever wonder why people talk about "PSFs" in astrophotography? Well, a PSF makes your image blurry, so you can never have a perfect image. I go into detail about what can affect a point spread function in this video. Become a buymeacoffee m...
The best option for astro image capture?? The RPi5 is almost deliciously good. First impressions.
4.5K views · 3 months ago
Tired of using a laptop for controlling your telescope and astrophotography equipment? Maybe it's time for a piece of the Pi! The Raspberry Pi 5 is a great alternative to pricier, more restrictive options. It consumes less power. And it might be able to do planetary imaging as well as capturing deep sky objects! Here are my initial thoughts on two setups: a Vilros starter kit, and an NVMe hat with...
Herbig-Haro Objects Are Hard to Capture... But They're Awesome :)
1.2K views · 4 months ago
Herbig-Haro objects are really cool! They show us where new stars are born! I shot M78 for 20 hours to get a couple of HH objects (there are 17 in the area I imaged), and I really only got one! But I still think it is cool :) #astrophotography Become a buymeacoffee member to help choose which products I review: www.buymeacoffee.com/deepskydetail Anton's Video @whatdamath ruclips.net/video/rOFS...
Drawing with circles... and its relation to image processing
2.1K views · 4 months ago
To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/DeepSkyDetail/ You’ll also get 20% off an annual premium subscription. Ever wonder why people talk about "frequencies" in photographs and astrophotography images? Here's an intuitive explanation of what frequencies are! Image processing doesn't have to be difficult if you know the right terms! Some great videos ...
How internet trolls almost won.
393 views · 4 months ago
I went to Dallas for the eclipse. But... there were supposed to be clouds. Was I able to get a good image? #astrophotography Become a buymeacoffee member to help choose which products I review: www.buymeacoffee.com/deepskydetail Affiliate Links: Skywatcher HEQ5 Mount amzn.to/3NVrVQi Skywatcher 80mm F7.5 Doublet amzn.to/47mbfZi ZWO EFW mini filter wheel amzn.to/48OxhoN Optolong LRGB Filter Set (...
Update on new sharpening/deconvolution tool for astro images...
2K views · 5 months ago
I've been working on something special...
1.7K views · 5 months ago
Debunking the myth about light pollution, and understanding how it affects SNR.
6K views · 6 months ago
Plans for the future. 1 year anniversary on YT!
861 views · 6 months ago
Transforming Monochrome Images into RGB with Ha, OIII, and SII. Pixelmath Magic in Siril.
1.9K views · 6 months ago
DIY Personalized Bahtinov Mask! Create it in free software and print it yourself!
1.4K views · 6 months ago
He may have saved astrophotography for us... (Reupload with better audio)
1.8K views · 6 months ago
Perfection or Deception? Challenging Astrophotography. Part 3 of "Is Astrophotography Real?"
1.7K views · 7 months ago
Astrophotography Explained: Decoding Signal to Noise Ratio and Number of Sub Frames
6K views · 7 months ago
Edit My Narrowband Data for Free! Siril and Gimp Astrophotography Tone Mapping Tutorial.
2.2K views · 8 months ago
How Much Better is an Ha Filter than No Filter! It's Scientific-esque! Bortle 6/7 Test
4.2K views · 8 months ago
Free diffraction spikes by @AstroAF! Installation on Windows.
1.1K views · 8 months ago
Beware of the hype. Unpaid reviews can be problematic... sometimes
1.4K views · 9 months ago
Sharpens stars independently! Is AstroSharp still FREE!? Newest Update Out Now!
6K views · 9 months ago
Why No One Agrees on Color in Astrophotography. Is Astrophotography Real Part 2: Context and Color
1.4K views · 10 months ago
Confessions of a bad astrophotographer....
2.6K views · 10 months ago
Make RGB stars look good with Ha! Gimp processing astrophotography tutorial.
2.2K views · 11 months ago
Free AI Denoising for Astro Images? This is not the current version...
10K views · 11 months ago
Process Saturn Quickly and Easily in Gimp!
1.3K views · 1 year ago
Is Astrophotography Real? Part I Color
2K views · 1 year ago
Let's measure the SNR! How much better is a dark site than the city?
7K views · 1 year ago

Comments

  • @anata5127 · 25 minutes ago

    Get a QHY530 ProII scientific camera. It will solve your problems.

  • @tostativerdk · 11 hours ago

    As always, very interesting, thanks for going into great detail and running actual tests.

  • @janelubenskyi1177 · 13 hours ago

    To my understanding… when using CMOS cameras, keeping the gain as low as possible (but within the range provided in the specs for your particular camera) and keeping the histogram at about 1/3 of the way from the left by adjusting the exposure time accordingly goes a long way toward lowering the noise problem… Also, of course, the more images you have (to a point) is a factor, along with a good and adequate number of calibration frames.

    • @deepskydetail · 13 hours ago

      Yes, I agree with all that! Also, binning/pixel size affects the signal part of the SNR measurement, so you'll be able to get the histogram about 1/3 of the way from the left more quickly. It should decrease the integration time needed to get a similar result compared to having smaller pixels/not binning.

  • @comeraczy2483 · 15 hours ago

    Interesting. I would have thought that with CMOS (as opposed to CCD) there is one amp for each pixel and therefore that it doesn't matter if the binning is done by the camera driver at capture time or in software afterwards (except of course for the larger files). If that's true, why put the emphasis on SNR per pixel? If that's not true, any idea why, and what the difference is?

    • @deepskydetail · 15 hours ago

      I'm pretty sure you're correct that it doesn't matter when you do the binning for a CMOS camera (it was mentioned at the end). So, maybe it should be more clear that the video is about binning and pixel size (and how they're similar). To answer your question: most of the noise in an image should be shot noise (whether it comes from the DSO or light pollution). Increasing pixel size (however it is done; binning is one way) will take the signal of multiple smaller areas and combine them. That's where the benefit comes in. People often forget the numerator/signal in the SNR calculation. Combining pixels, or having larger pixels, increases the "S" relative to the "N," which helps a lot with SNR. But the tradeoff is resolution.

    • @deepskydetail · 14 hours ago

      I guess another way to think about it is that with a 2x2 bin and a CMOS camera, it's like stacking 4 pixels together (assuming equal illumination). On the other hand, if you had a CMOS camera with 2x the pixel size, it would be just as good (if not better b/c of less read noise) as the 2x2 bin in terms of SNR. Of course, with smaller pixels, you get to choose whether to bin or not, so there is more flexibility :)
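
      A quick way to check this is a shot-noise-only simulation (illustrative numbers, not data from either camera; numpy assumed):

      import numpy as np

      rng = np.random.default_rng(0)
      n_subs = 100_000

      small = rng.poisson(25, size=(n_subs, 2, 2))   # four small pixels per sub
      binned = small.sum(axis=(1, 2))                # software 2x2 bin (sum)
      big = rng.poisson(100, size=n_subs)            # one pixel with 4x the area

      for name, x in [("2x2 binned", binned), ("one big pixel", big)]:
          print(f"{name}: SNR = {x.mean() / x.std():.2f}")   # both ~10 = sqrt(100)

      # A single unbinned small pixel, by contrast, sits at ~sqrt(25) = 5:
      one_small = small[:, 0, 0]
      print(f"one small pixel: SNR = {one_small.mean() / one_small.std():.2f}")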

    • @comeraczy2483 · 9 hours ago

      @@deepskydetail thanks for taking the time to clarify. Sadly, I still don't understand why the SNR of single pixels is important enough to lose resolution by binning. We both have the same answer to your question at 5:15: no, the signal of that DSO is the same, regardless of binning! So, why should I lose resolution on my DSO by binning the pixels if we both agree that this won't change the SNR of the DSO? Sure, by binning, I'll have big, blocky, averaged pixels, but how can this possibly make the DSO look prettier? Sorry if I am daft, but what am I missing?

    • @v0ldy54 · 6 hours ago

      @@comeraczy2483 there is zero benefit from binning on a CMOS sensor; the only reason you'd do it is if you need a faster file transfer rate from the camera or smaller files. From a quality standpoint, binned images from a CMOS actually end up looking worse than normal ones.

  • @ghillan · 16 hours ago

    I'm missing something at 5:49... Why is the noise of a single pixel 7.07? If the "single pixel camera" had noise = 10, and we divide the same sensor into 2 pixels, the total noise of all pixels should still be 10 (we didn't change the sensor technology, but we halved the area where the noise could occur). Assuming that the noise is more or less equally distributed, then "noise per pixel" = "total noise" / number of pixels = 10/2 = 5.

    • @deepskydetail · 16 hours ago

      This is a good question. There is a weird thing about Poisson shot noise. It doesn't split the same way the signal does. The standard deviation (i.e., noise measurement) only depends on the mean of the distribution. It is the square root of the signal. So, in this case if we have a pixel with a signal of 50, the noise becomes the square root of 50, or roughly 7.07.

    • @v0ldy54 · 6 hours ago

      @@deepskydetail if we're calculating the overall SNR, however, you need to sum the noise in quadrature, so that would be: SIGNAL*NrOfSources/sqrt(Noise^2*NrOfSources) -> 50*2/sqrt(7.07^2*2) = 10, just like a single pixel with 100 signal.

    • @deepskydetail · 6 hours ago

      ​@@v0ldy54 We're not finding the SNR of the whole area though. We're finding the SNR of the individual pixels. That's what we're interested in, especially in very faint areas. If you take two images, one without binning, and the other with 2x2 binning, the 2x2 will be brighter. If you stretch the unbinned one to be as bright as the binned one, it will look noisier. You can actually see the difference around 13:00. The SX (and binned ZWO) is smoother (but less resolution) and the right one is noisier (with more resolution). I tried to equalize brightness and keep the stretching the same. The graph at 13:49 shows the actual calculated SNR for the 2x2 bin and unbinned ZWO, which confirms it.

    • @v0ldy54 · 5 hours ago

      @@deepskydetail ok, but the final image should be compared at the same output size, so it's the SNR over the same area of the final image that matters.
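
      For what it's worth, both figures in this exchange follow from the same arithmetic (numbers taken from the 5:49 example above; a sketch of the math only, not camera data):

      from math import sqrt

      total = 100.0                              # e- falling on the whole patch
      area_snr = total / sqrt(total)             # 100 / 10   = 10   (whole patch)

      pixel_sig = total / 2                      # 50 e- on each of two pixels
      pixel_snr = pixel_sig / sqrt(pixel_sig)    # 50 / 7.07 ~= 7.07 (per pixel)

      # Recombining the two pixels (noise adds in quadrature) recovers the
      # per-area figure: 2*50 / sqrt(2*50) = 10.
      recombined = 2 * pixel_sig / sqrt(2 * pixel_sig)
      print(area_snr, round(pixel_snr, 2), recombined)   # 10.0 7.07 10.0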

  • @gregmac8268 · 16 hours ago

    This is very informative and easy to understand

    • @deepskydetail · 16 hours ago

      Glad to hear that! Thanks for the feedback!

  • @JohnICGomes · 22 hours ago

    Great video and awesome web app. Would be great to make this into an iOS app.

  • @deepskydetail · 1 day ago

    When working on this video, I talked to @CuivTheLazyGeek a bit, and he was really kind and informative about discussing filters (one of the things that could have affected the previous video's results). While I don't specifically talk about filters in this video, the conversation was super helpful, and I may discuss more about filters in a future video. I highly recommend you watch his reviews of the various filters he tests! Also (!) @williamhedges1519 with Cosmic Film Studios has a YT channel that provided the green screen spaceship model video used in my introduction. He lets anyone use it (based on the video description). He does a really cool "Cosmic Cat" series that I enjoy. I thought I'd give a shoutout. The greenscreen stuff is here: ruclips.net/video/r1oQ0LBQlq0/видео.html Cosmic Cat Episode 1 is here: ruclips.net/video/uQBOTuzaFYg/видео.html

  • @TheUncaDaveyShow · 1 day ago

    First time to your channel. Subscribed. Loved the laser/fishing line demo! So... when you integrate that diffracted output image from 0 to 180 deg (or 360?), your result is the Airy disk? So obvious now, but I never "got it" till I saw your little experiment. Thx!

    • @deepskydetail · 1 day ago

      Thanks for the sub! I appreciate it! I'm glad you liked the demo. When I saw that demo as I was preparing to make the video, it clicked with me too. There's so much I learn as I'm doing this that I think, "I've got to share this! It's really cool!" :)

  • @DylanODonnell · 3 days ago

    Just wanted to let you know I shout you out in my latest video that will come out today. Thanks for your great work!

    • @deepskydetail · 3 days ago

      Thanks for the shoutout! I appreciate it :)

  • @jamespeirce2582 · 4 days ago

    This is a strange video, sir. The concept you seem to be discovering here is called “sampling” (see also over- and under-sampling) and it represents a trade-off in potential resolution of detail relative to signal vs noise. And you can shift this ratio with binning, or, in post-processing, by using a tool like Integer Resample in PixInsight. No real downside here… but rather more flexibility. The 294’s case is kinda shot by its dark current and the complexity in getting a truly clean (not just looks clean) calibration with narrowband long exposures. I’ve not been able to get better signal vs noise from my 294MM vs my 2600MM, bin2 on the 294MM (i.e. grouping the quad bayer sub-arrays) vs Bin1 on the 2600MM, despite this configuration also lending to the 294MM an advantage in larger “pixels.”

    • @deepskydetail · 4 days ago

      Thanks for the comment. Could you please point to where in the video I claim to be discovering something new 😉 Around 11:05-11:40 I talk specifically about sampling and the tradeoffs of resolution/sensitivity. The point of the video is to figure out why some of my viewers were having a hard time getting good SNR with the camera. It didn't start out as a video about sampling; sampling just came about naturally as I was looking at the data from the two cameras' images. Your comparison of the 294 and the 2600 is interesting. If you have any data you'd like to share, I'd be more than happy to take a look!

  • @EricMilewski · 5 days ago

    Wow, interesting! Thanks for the info 👍

  • @jeremybinagia · 5 days ago

    Hi, thanks so much for making this app and explaining it so clearly! I tried it with my OSC + narrowband filter setup but I was running into trouble. I was wondering if any of the fields require special treatment if using an OSC camera and/or narrowband filters? Or perhaps if the app is just meant for mono?

    • @deepskydetail · 4 days ago

      You can use it for OSC cameras too. I would just use the average brightness of the RGB channels (however you want to do it).
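
      One possible way to get that average from a debayered OSC sub (a sketch only: the file name and the channel-first layout are placeholders, and astropy is assumed to be installed):

      import numpy as np
      from astropy.io import fits

      rgb = fits.getdata("osc_sub_debayered.fits")   # assumed shape (3, H, W)
      mean_brightness = float(np.mean(rgb))          # average over R, G and B
      print(f"average brightness: {mean_brightness:.1f} ADU")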

  • @dummag4126 · 8 days ago

    I use an old Raspberry Pi 4 with an INDI server running on it. I don't care about its OS; I just use it as a server. Outside in the cold, the 2 CCDs, the mount, wheels, and weather are connected to the Pi. And at home on the desktop (Linux) I have several clients: Kstar, Stellarium, Pixinsight... etc. I save the frames at home on the HD. I actually prefer INDIGO, but it doesn't work with the Kstar/Ekos client. I'm using the Stellarmate server, but Astroberry is also fine.

  • @davidphillipson7256 · 8 days ago

    all really good and useful except the annoying music, not necessary really

  • @rehon101 · 9 days ago

    is it on Mac?

    • @deepskydetail · 4 days ago

      Yes, instructions for how to set it up are here: ruclips.net/video/IRe-d-Izo5o/видео.htmlsi=CPz-ws5aYLzGhCXs&t=691

    • @rehon101 · 4 days ago

      @@deepskydetail thanks

  • @DaPablol · 19 days ago

    That opening goes HARD!

  • @hmuphilly9129 · 19 days ago

    So does the moon have an H-alpha filter? Lol

  • @hmuphilly9129 · 19 days ago

    There’s scumbags anywhere online

  • @stephenjones7474 · 19 days ago

    What prior distribution did you use for your Bayesian model? Also, was there the same number of replicates for each of the combinations, or was the total for the stacked image kept the same? The latter may impact the precision of the estimate of SNR for the 120s exposure combinations, as you only have 20 reps per combo vs 160 per 15s combo. Thank you for your video. Steve

    • @deepskydetail · 18 days ago

      Good questions! Based on what I understand from JASP's documentation, the underlying distribution is just a normal distribution. Other distributions (e.g., ex-Gaussian or gamma) might be better, but looking at the overall histograms, a normal is a decent approximation (I'm also a bit lazy and didn't want to code things in JAGS or Stan assuming different priors). There were the same number of samples/replicates for each combination.
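
      This is not what JASP does internally; it is just a rough sketch, under a plain normal likelihood with flat priors and made-up replicate values, of the quantity being estimated (the difference in mean SNR between two sub-exposure groups):

      import numpy as np
      from scipy import stats

      snr_15s = np.array([4.1, 4.3, 3.9, 4.2, 4.0, 4.4])    # hypothetical replicates
      snr_120s = np.array([5.0, 5.2, 4.8, 5.1, 4.9, 5.3])   # hypothetical replicates

      diff = snr_120s.mean() - snr_15s.mean()
      se = np.sqrt(snr_15s.var(ddof=1) / len(snr_15s) +
                   snr_120s.var(ddof=1) / len(snr_120s))

      # With flat priors the posterior of the mean difference is roughly a t
      # distribution centered on the observed difference (a crude approximation).
      dof = len(snr_15s) + len(snr_120s) - 2
      lo, hi = stats.t.interval(0.95, dof, loc=diff, scale=se)
      print(f"mean difference {diff:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")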

  • @Sharpless2 · 20 days ago

    It is really funny how many people, especially on cloudynights, no matter if youngling or old grunt, try to push the "total integration matters more than sub exposure time" myth. I mean, I can clearly see and measure an absolutely astronomical difference between a single 300" sub and a single 180" sub of NGC281. If your longer test subs already look better than your shorter test subs, shouldn't it be obvious that the old integration time myth is false?

    • @deepskydetail · 18 days ago

      Thanks for the comment :) I think the benefit of going longer is most apparent in really faint areas. You help push down the noise in those areas, which really helps! Although stacking definitely helps with exposure times of all types!

  • @janeclark1881 · 21 days ago

    I use colour cameras. I had the ASI294MC and traded up to the 2600MC. For photographing galaxies, my exposure time went right down, and the quality went right up. I'm not really a nebula photographer, so my life is not dominated by H-alpha. I also see the bigger chip size as a significant benefit of CMOS over CCD.

    • @deepskydetail · 20 days ago

      Thanks for the comment! The bigger format is amazing, I think!

  • @Ptolemusa · 22 days ago

    Would be interesting to see the experiment include super short exposures, like sub-second ones, and super high gain, max gain or near max. Like what the result would be if you essentially filmed the DSO at max gain for an hour. I don't think the result would be good, but I'd like to see it. Might have to try it myself.

    • @deepskydetail · 22 days ago

      That would be interesting! If you do it, let me know!

  • @siegfriedberger7009 · 23 days ago

    Hi Mark, just a question about bigger pixels... You also have an ASI294mm Pro with even smaller pixels but a good QE!? Please comment about the 294MM, which I also have and love!

    • @deepskydetail · 23 days ago

      I'm working on a video about the 294 right now as a follow up. Stay tuned!

  • @ekalbkr · 24 days ago

    Thanks! While my higher math skills are somewhat rudimentary (maybe I could benefit from Brilliant😊), I understand enough to appreciate the approach. This and another recent video got me to subscribe. Real numbers matter - even imaginary ones!

    • @deepskydetail · 24 days ago

      Thanks! Glad you've come aboard :)

  • @MastodonRockss · 25 days ago

    If you look at scientific-grade cameras, most of them have substantially larger pixels than the modern consumer-grade CMOS chips. I believe it's for this reason: they're less worried about resolution and more about SNR.

    • @deepskydetail · 25 days ago

      I think that makes sense, to me. Also, a lot of professional telescopes have quite long focal lengths. Bigger pixels also help to match the resolution. I think you're 100% right that pixel size is important to getting good SNR at long focal ratios and large aperture! It reminds me of an astro imaging channel live where they talked about why you'd want bigger aperture scopes.

    • @MastodonRockss · 25 days ago

      Yeah, thinking of a planewave or something like it.

  • @ekalbkr · 26 days ago

    I'll take your advice as directed: with a grain of salt. Yes, binning can help, but typically only if your calculated arc"/pixel falls below about 1/1. I do find that on both my 72mm f4.9 and 102mm f7 units, a dither and drizzle (2x) provides a dense, nicely resolved image, albeit with the occasional artifact. As for CCD vs CMOS, I've only compared them in non-astro cameras, but I found the CCD sensor to offer a bit more color contrast. As a result, my ancient 8-megapixel Canon S95 has become my favorite street camera, over a larger 1" CMOS unit. Though that comment is outside the scope of your video, I think it may point to agreement with your findings....

    • @deepskydetail · 25 days ago

      That's pretty interesting about your Canon camera. And of course, binning isn't always the best plan of action, depending on the pixel scale. Thanks for the comment!

  • @Neanderthal75 · 26 days ago

    Hello! There have been some heated discussions related to QE and sensitivity graphs. If you look at the newest small chip, the IMX585, it shows a much greater sensitivity to Ha and near IR than the 533 or 2600. Granted, it's a small sensor, but it may be capturing more than the others, while being smaller.

  • @SuperibyP · 27 days ago

    Hate to be that guy, but what's the track that starts at about 7:13 or so? I've heard it a few times and I really like it!

    • @deepskydetail · 27 days ago

      Be that guy! No shame :) This one? ruclips.net/video/T9IXodtjRgs/видео.html Heaven and Hell by Jeremy Blake (it's in the YT music library)

  • @giovannipaglioli2302 · 27 days ago

    Hi, and thanks for this video! Well done! I would like to add something to your considerations, and I hope it is useful for solving the "mystery". As you stated, there is a different quantum efficiency for a given wavelength, and we can see that in the QE curve of the sensors. In my opinion there is too much hype regarding the tech specs of the cameras, especially on the readout noise side. You can only compare two sensors that are sampling the same angle of sky per pixel, since the photometry is what rules, not the system. A fixed flux of photons per second exists for a given angle, and you can compare different systems only at that same sampling angle. The old CCDs used to have much more precise electronics, and there was a single readout point for the data. Today's CMOS sensors have an A/D converter for each pixel, and they are not engineered for "precise" measurement but for fast reading. When you look at the many graphs published for CMOS, they highlight the low readout noise of CMOS compared to CCDs, but is it really true? CCDs used to have a fixed gain, calculated to optimize dynamics. Nowadays, changeable gains on CMOS have the effect of "changing the readout noise", which looks very low compared to CCDs. The problem with changing the gain is that you count fewer and fewer electrons in a bigger and bigger block. Since the full well is not changeable and depends on the intrinsic physics of the silicon, the bigger the block, the less precise the reading and the smaller the usable full well. Consider a CCD with 5e readout noise on a 50k full well: that is 0.01% of "noise", better to say uncertainty, while 3e on a 10k full well is 0.03% of noise, three times more! Another thing to consider is the declared full well... In reality it only depends on the area of the silicon used for the photoelectric effect. A 3.7x3.7 micron area of silicon, even if it is perfect, can't retain more than 12-16ke; this is physics, not advertisement... Maybe the CMOS has a memory and reads the pixel when it is close to full, retaining the information for the final sum (buffering), but that is not true full well. In the end we are using sensors which are not even close to being engineered for precise measurement like the CCDs were, but for different industrial applications. We are using these sensors because they are cheap compared to CCDs and also because CCDs are no longer being produced. In the end it is not the absolute number you have to look at but the percentage and the precision of the read data. The uncertainty is "noise" in digital, since we are no longer taking an image; we are making a measurement instead, and how precise it is is the SNR, which translates into contrast when we choose to represent these data as an image. Remember that they are just numbers in the digital world, or "datels", and you can play them as a "sound" if you wish; they will sound like horrible noise to your ears, but they are still EXACTLY your data!
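
    The full-well arithmetic in this comment, spelled out (the numbers are the commenter's illustrative examples, not measured specs for any particular camera):

    from math import log2

    cameras = {
        "CCD-like,  5 e- read noise, 50k full well": (5.0, 50_000),
        "CMOS-like, 3 e- read noise, 10k full well": (3.0, 10_000),
    }

    for label, (read_noise, full_well) in cameras.items():
        pct = 100 * read_noise / full_well    # read noise as % of full well
        stops = log2(full_well / read_noise)  # dynamic range in stops
        print(f"{label}: {pct:.2f}% of full well, ~{stops:.1f} stops of dynamic range")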

    • @deepskydetail · 27 days ago

      Thanks for the comment! I think there is some great information here. I agree that the angle and sampling of the sky are super important, and it's a major problem in comparing the two images. I'm in the process of doing more rigorous tests, and all the conclusions in this video are very tentative! About CMOS vs. CCD, this is really good info. Would you have some sources for me to look at so I can study this more? I've never thought about the read noise to well depth ratio before, and it makes sense on an intuitive level. Let's see if I'm understanding this: if you expose to the maximum without overexposing, your read noise as a percentage of the actual signal is less than that of a CMOS camera? That's because in order to get the low read noise value in a CMOS camera, you're actually losing dynamic range, decreasing the full well depth? Anecdotally, one thing I noticed about my Starlight Xpress 825 was that it does seem to be less noisy than my 294MM, even though the read noise is greater in the CCD. I just felt that the images were smoother somehow, but I might be wrong, and it might be due to pixel size. Much to learn, and if you have some sites/books/articles to share, please do! You can always email me at deepskydetail at gmail.com!

    • @giovannipaglioli2302 · 23 days ago

      @@deepskydetail Thanks a lot! My only purpose was to stimulate curiosity about this, and I hope it's useful... First, I would like to point out that with digital cameras we are no longer "taking pictures" but "making measurements". Our intent is to measure, as best as possible, the intensity of the energy (emitted in the form of photons) of a subject. Our camera is an array of "micro-sensors" that we call pixels, even if a pixel is something different in the digital world. In any case, we can only measure the total energy that falls onto a single pixel, for every pixel, in a given time. We choose to represent these different energies in the form of light reproduced by a monitor, which is quite natural since we are capturing emitted "light". This points out that, in a perfect world, we can only get that total energy and nothing more, no matter the system; in that case our best result is to get that exact energy. When we take such "measurements" we get some kinds of errors in the process that alter the "real" data we want to capture. What we call "noise" in digital is exactly that: how much of the data is certain to come from the source we intend to measure, and how much is not? That is the SNR, which is more correctly called certain vs. uncertain. Readout noise is part of the electronic noise and is something related to the system that we can't measure or take out of the equation, since it is not fixed. When we say 5e of read noise, it can be anywhere from +5 to -5, but it can even be 0! The fact is that it is different each time, and we have no way to measure it. Other forms of noise we can deal with in some ways, e.g. dark current noise: we can take dark frames and then subtract them from the image to "correct" the measured values, but we can't remove the uncertainty of the read noise from the dark frames either, so another small error contributes to the final measurement (that's why the professionals use cryogenic cooling for their cameras and don't take flats at all). Anyway, the main difference between CCDs and CMOS is their purpose. The former (CCDs were originally invented to be computer memories) were made for taking precise measurements, and they involve complex electronics and clocking systems to work; for this reason they need a lot of energy. CMOS, on the other hand, was born for industrial uses and didn't need a high degree of precision. At first they were used as counters or triggers for industrial machines: imagine a conveyor belt with a lot of cans of Campbell's soup; a CMOS sensor underneath the belt could see the light darken for each passing can and count them. These kinds of applications were the ones CMOS was intended for. Today we are too few users for the producing companies to think about making a CMOS sensor as precise as a CCD for measurements. We could be 1,000,000 people, but they make 8 billion CMOS sensors per year just for the mobile phone industry! Anyway, we are getting smarter, since we don't really take pictures anymore; we do statistics instead! That's why we are getting better and better even with sensors that are not made for our purpose. It is important to understand that the read noise is an error we can't remove, but we can bring it to a value we are comfortable accepting, and this is true for all the total noise that we can measure or deal with in some way.
      As a fixed number in a range it is indeed a percentage: with higher full wells it becomes a smaller and smaller percentage, while the uncertainty on very low full wells becomes higher and higher as a percentage of the whole signal. We are adapting ourselves to smaller pixel sizes, and this even becomes convenient for us because we need shorter focal lengths to achieve the same spatial resolution per pixel compared to CCDs, but at the cost of precision of the data and full well. We have no way of knowing what any CMOS converter does behind the scenes, since that is an industrial secret. For example, nowadays we take flats just to correct the "vignetting" of the system, but with CCDs we were also correcting variations in sensitivity or discrepancies from one pixel to another! That's because the CMOS still has a map inside the electronics that corrects these values, but how precise is that correction? We don't know, and we cannot change it. All this kind of trickery was not present at all in CCDs. In the end, that's mainly why your CCD frames seem more "smooth" to you: because they are! The data has a higher degree of precision. Sorry, I wrote too much... Anyway, these are two great sources to read if you wish: CMOS/CCD Sensors and Camera Systems, Second Edition, by Gerald C. Holst and Terrence S. Lomheim; and Photon Transfer, by James R. Janesick.

    • @giovannipaglioli2302 · 22 days ago

      @@deepskydetail Sorry, but I would like to make another, maybe useful, consideration... There is another difference between CCDs and today's CMOS, and that is the quantization noise. The CCD has a true 16-bit converter, while all current CMOS sensors are double-reading 14-bit: they read the data twice (and I don't know if double reading also doubles the readout noise...) with converters at different gains and interpolate the results to "obtain" a 16-bit approximation. Try shooting a high-dynamic-range subject with large light differences, with the same scope, the same sampling angle, and the same fairly long single exposure, e.g. 300 secs (you will probably saturate the CMOS, not because it is more sensitive but because it has a smaller full well), with both the CCD and the CMOS, and then look at the histograms... The CMOS one will be much "smaller" and "spikier" than the CCD one. Then try increasing the non-linearity of the representation in fixed steps on both images, and you will see that in the CCD you retain more and more "tones" and the capacity to distinguish more light differences, while the CMOS will "appear" to occupy a very small area with fewer tones to show. This is another effect of the precision of the data readout...

    • @deepskydetail · 20 days ago

      Thanks for the info and sources! I now really want to test those ideas, like comparing the histograms. Sounds like some ideas for new videos ;) I also checked your AstroBin images and was blown away! Great images!!

  • @redrocklead · 29 days ago

    My ccd cameras had so much less noise. Cmos sux.

    • @deepskydetail · 29 days ago

      What cameras do/did you have if you don't mind me asking? I think that generally the newer CMOS cameras have less read noise. But the CCDs tended to have bigger pixels.

    • @redrocklead · 29 days ago

      @deepskydetail I'm sorry, I should have been clearer. They were the first digital cameras from Fujifilm. They switched to CMOS for cost reasons. Hopefully my new Move Shoot Move will help.

  • @DrunkferretKG · 29 days ago

    I have both ZWO 533s. I've noticed the biggest hit I take is for SII. I had a 294MC Pro before the 533s, and I can see the difference. I want a 294MM Pro so I effectively get two pixel sizes. I can't wait to see a 47M-pixel Ha sub from one of my small refractors.

    • @deepskydetail · 29 days ago

      Yeah, the QE for these chips (e.g., 533, 2600, etc.) does drop off quite a bit; SII is around 50% QE. You can still have two pixel sizes with your 533, though, by binning! Bin 2 mode on the 294MM, I think, is just 2x2 binning from what I understand (with some gain-setting manipulation going on!). Also, seeing and aperture size influence resolution too! Let me know if you get the full 47M (I've actually never tried it myself; I generally stick to bin 2 mode)!

  • @nikaxstrophotography · 29 days ago

    Great video! Hope you heal well, sooner rather than later.

  • @v0ldy54 · 1 month ago

    I don't think it's the pixel size; pixel size doesn't impact the image's overall SNR if you're only counting shot noise, since the light gathered is the same. That's why if you pick two mirrorless cameras with different megapixel counts and the same sensor size, they'll perform very similarly, except for a small advantage to the low-megapixel camera due to fewer sources of read noise. I think there are too many variables between the two systems to draw conclusions; the test should be done with the same setup. Edit: also, binning has zero benefit for a CMOS image's SNR. It works on a CCD because the charge of a 2x2 group of pixels is added and physically measured just once, so it reduces read noise, while on a CMOS you're still reading every single pixel and just averaging the values mathematically afterwards, so not only is there no benefit to the SNR, but you're also throwing away resolution.

    • @deepskydetail · 1 month ago

      Thanks for the comment! Some responses:
      1) Pixel size does matter as well. Let's consider just the light-gathering ability, as you mention. Say a given area on the imaging circle gets 4 photons of light on average each sub-exposure. If there is only one pixel (and of course only considering shot noise and a perfect imaging setup), the SNR is going to be 4/sqrt(4), or 2. The average SNR of the pixel is 2 (2 divided by 1 pixel). If there are 4 pixels, then each pixel will have an SNR of 1 (signal = 1, noise = sqrt(1), SNR = 1). The average SNR of the four pixels is 1 (1+1+1+1 divided by 4 = 4/4 = 1). The 4 pixels are going to be noisier than the 1 big one. Instead of 1 big pixel with 1 value, you have 4 small pixels with slightly different values. That will look noisier.
      2) "I think there are too many variables between the two systems to draw conclusions, the test should be done with the same setup." I agree! I hope to do follow-up tests in the future. I tried to make it clear that the conclusions and the test are flawed and I need more data.
      3) "Binning has zero benefit to a CMOS image SNR." This is, as far as I know, not true. You still get the benefit of adding together the signal. However, as you mention, you add in the read noise of all the pixels instead of just one pixel. But read noise in newer cameras is pretty small, so you still benefit like you do with CCD binning. Altair has a good explanation of CMOS binning here: www.altairastro.help/info-instructions/cmos/how-does-binning-work-with-cmos-cameras/
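
      A small sketch of point 3, with illustrative numbers only (no real camera data): software binning on a low-read-noise CMOS still recovers most of what hardware binning gives, because the signal adds linearly while the read-noise terms only add in quadrature.

      from math import sqrt

      signal_per_small_pixel = 25.0   # e- of sky + target per sub (assumed)
      read_noise = 1.5                # e- per pixel read (assumed, modern CMOS)

      def snr(signal, shot_var, read_var):
          return signal / sqrt(shot_var + read_var)

      unbinned = snr(signal_per_small_pixel, signal_per_small_pixel, read_noise**2)
      soft_bin = snr(4 * signal_per_small_pixel, 4 * signal_per_small_pixel,
                     4 * read_noise**2)    # CMOS: four reads added in quadrature
      hard_bin = snr(4 * signal_per_small_pixel, 4 * signal_per_small_pixel,
                     read_noise**2)        # CCD-style: one read per binned pixel

      print(f"unbinned small pixel : {unbinned:.2f}")
      print(f"CMOS 2x2 software bin: {soft_bin:.2f}")   # ~2x the unbinned figure
      print(f"CCD 2x2 hardware bin : {hard_bin:.2f}")   # only slightly better again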

    • @v0ldy54 · 29 days ago

      @@deepskydetail AFAIK, since they're 4 independent and unrelated sources of noise, when you're averaging shot noise from adjacent pixels you should sum those noise sources in quadrature. That means a single pixel gets 4 signal / √4 noise = 2. 4 pixels with 1 noise each will get 1*4 / √(1²*4) = 2. It's the same, which makes sense since shot noise is only dependent on the amount of light and nothing else. It still works for bigger numbers (just pointing out, since 1 squared might look weird): with 100 signal on 1 pixel you get √100 = 10 noise; with 25 signal on 4 pixels (each with 5 shot noise) you get 25*4/√(5²*4) = 10. Again, you can check, for example, on the DPReview comparison that there is a very small difference between an A7RIV with 60 mpx and an A7III which only has 24; if shot noise changed, the result would be vastly different (I'll link in another comment in case stupid RUclips blocks the link). The problem with CMOS binning is that you're not gaining a real benefit in SNR; mathematically it's exactly the same as simply reducing the image size, which can be done in post after you've stacked the full-resolution files, so there is no reason to bin a CMOS unless you need faster transfer speed during acquisition (which is definitely not the case for deep sky astro).

    • @v0ldy54 · 29 days ago

      @@deepskydetail www.dpreview.com/reviews/image-comparison?attr18=lowlight&attr13_0=sony_a7iv&attr13_1=sony_a7iii&attr13_2=sony_a7riii&attr13_3=sony_a7riv&attr15_0=raw&attr15_1=raw&attr15_2=raw&attr15_3=raw&attr16_0=6400&attr16_1=6400&attr16_2=6400&attr16_3=3200&attr126_2=1&attr199_3=1&normalization=compare&widget=1&x=0.086367625548513&y=-0.14081228556828976 Here is the comparison. Of course the A7III wins because read noise is lower, but that's not even a single stop of difference, even with pixels that are almost 3 times smaller in area.

  • @Microtonal_Cats · 1 month ago

    "My arm is in a sling because some guy at the bar said 'Zwo cams are best', and I had to teach the brute a lesson. Sure, my arm got broken, but you should see HIM."

    • @deepskydetail · 1 month ago

      lol!😂 I would definitely be the one getting the short end of the stick in that scenario!

  • @old_photons · 1 month ago

    Really enjoying your videos for some time now. It is great when you are quick to disclose when comparisons aren't fair, or the analysis flawed. Your conclusions are carefully weighted. Best of luck with your shoulder recovery.

  • @AstroAF · 1 month ago

    Really interesting subject Mark! Glad to see you’re well and on the mend! Cheers! Doug

  • @Ben_Stewart · 1 month ago

    I had the ASI294MM, and although I really liked it for its versatility (2x2 bin, etc.), the flats were a bit of a nightmare. Variable ADU at short exposures really hurt this camera. I ended up selling it and sticking with my trusted 2600MC. I really want to get back into mono, but I'm not sure whether I want APS-C or the bigger full frame. With the new European USB-C rules, I wonder if ZWO will be forced to re-engineer the cameras.

    • @deepskydetail · 1 month ago

      I do enjoy my 294MM too. But I agree that flats with the 294MM are difficult. It's hard to get longer flat exposure times, especially in L. A lot of times, I end up having to redo them!

    • @GoldenJackalTutorial · 7 days ago

      @Ben_Stewart @deepskydetail Don't stress too much over the flats: find a flat panel with an extremely dim setting, set NINA to the recommended ADU value with a maximum deviation of 2%, and let the app shoot your flats with dynamic exposure. Mine are around 10s of exposure. Guess what, they correct the lights so well that SPCC in PixInsight sometimes doesn't even do much, since the channels are already properly corrected by the flats. This camera indeed goes nuts with short flats or bias frames, so f that, give her what she needs: long flats and dark flats instead of bias frames. I've been using it for over a year and I love the results. The small pixels start becoming a little problematic at my 1200mm focal length, but so far deconvolution takes care of the oversampling pretty well, boosting the sharpness of the image. I would not recommend this camera at an even longer focal length, though.

  • @stevenmiller5452 · 1 month ago

    I appreciate you doing these comparisons. However, I think you are conflating pixel SNR with target SNR, or more precisely SNR per fixed unit of sky area. Yes, a smaller-pixel camera will have a lower SNR per pixel, but if it has four times as many pixels, you can create an equivalent of a camera with 1/4 the number of pixels, because four pixels can be averaged together, and that will double the SNR at the same pixel scale as the camera with larger pixels. This is an extremely important point: you need to normalize to a fixed pixel scale. Finally, comparing examples that are taken on different nights or with different equipment is really difficult, because sky clarity can change dramatically. I've had examples where the sky clarity seemed somewhat similar, but the signal-to-noise ratio from two different nights was almost 2x different. Target altitude and moon phase are also very important factors, as well as the background light pollution. And finally, when comparing different filters, it's not just the width of the filter: different filters have different peak band passes at Ha, although I admit this factor is typically a 10 to 15% difference, but still, that's a difference. The two biggest points are that you need to normalize for a fixed area of the sky or a fixed pixel scale, and that if you're going to compare two different captures, they need to be done with the same equipment on the same evening.
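
    A minimal sketch of the normalization step being suggested (a synthetic flat frame with shot noise only and a made-up flux level; numpy assumed):

    import numpy as np

    rng = np.random.default_rng(1)
    img = rng.poisson(25, size=(1024, 1024)).astype(float)   # small-pixel frame

    def snr_per_pixel(a):
        return a.mean() / a.std()

    # 2x2 block average = resample to the larger camera's pixel scale
    binned = img.reshape(512, 2, 512, 2).mean(axis=(1, 3))

    print(f"native pixel scale: SNR ~ {snr_per_pixel(img):.1f}")     # ~5
    print(f"after 2x2 average : SNR ~ {snr_per_pixel(binned):.1f}")  # ~10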

    • @deepskydetail · 1 month ago

      Great comment! About the different setups/nights, I completely agree with you; as I stated a few times in the video, the comparison is quite flawed and I really want more data to test things with. That being said, most of the variables that I do have data for, related to the equipment, moon phases, etc., are in the viewer's favor. About the image scale, I also agree with you, which is why I mentioned binning and the tradeoffs with resolution at the end of the video ;) Thanks!

    • @stevenmiller5452 · 1 month ago

      @@deepskydetail Thanks for the response. Perhaps you can go back and normalize for pixel scale, see how that impacts your results, and update your video to keep it as up to date with your latest thinking as possible. Bummer about your injury; I hope you heal quickly. I think this is critical because I see it being missed by many people and even websites. I think it would be a huge service to the AP community for them to know that in order to compare systems you need to compare at equivalent pixel scale.

    • @deepskydetail · 1 month ago

      That's a good idea!! If I were to guess, I'd think the results will show that the SNR per unit area is very similar for both cameras (one has slightly higher QE, the other has lower read noise). I also think that with digital images, the overall SNR of the image will change how good it looks, and pixel size is one (out of many) important factors in that overall SNR. Even if the SNR per unit area is the same, an image with bigger pixels might reach a better overall image SNR faster than one with smaller pixels, which of course is why binning might be considered. Sorry for the rambling, but I guess what I'm saying is that a camera using bigger pixels, all things equal, will produce a smoother-looking image faster than one with smaller pixels. It's human perception at the end of the day that makes the judgment. The tradeoff, of course, is resolution (i.e., start zooming in on the bigger-pixel image, and things might start getting blocky).

    • @stevenmiller5452 · 1 month ago

      @@deepskydetail actually this has been debated extensively on cloudynights, and the general consensus is that you can always downscale the higher-resolution image to the equivalent pixel scale and improve the SNR, even after stacking, so there is really no advantage to the larger-pixel-scale camera unless you are suffering from read noise. The only reason a larger-pixel-scale camera is better is to swamp read noise more quickly, but as you know, CMOS cameras have very low read noise, so this is unlikely to be the case here even with smaller pixels. I think if you work the math you'll find this to be true. So really the main difference comes down to Ha sensitivity.
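
      A rough sketch of the read-noise-swamping point, using the common ~10x read-noise-squared rule of thumb (that threshold and every number below are assumptions for illustration, not measurements):

      sky_flux = 0.8       # e- per arcsec^2 per second (hypothetical sky)
      read_noise = 1.5     # e- RMS per pixel (hypothetical camera)
      target_bg = 10 * read_noise**2   # background that makes read noise minor

      for name, scale in [("small pixels, 0.7 arcsec/px", 0.7),
                          ("large pixels, 1.4 arcsec/px", 1.4)]:
          e_per_sec = sky_flux * scale**2   # sky electrons per pixel per second
          sub_len = target_bg / e_per_sec   # sub length needed to hit the target
          print(f"{name}: ~{sub_len:.0f} s subs to swamp read noise")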

    • @deepskydetail · 1 month ago

      @stevenmiller5452 I see what you're saying, and I agree! That does make sense the way you've explained it. The thing is, generally, the people who have contacted me using these CMOS cameras aren't binning, downsampling etc., and they are wondering why the SNR is so bad (and consequently why it takes them so long to get an image they want). They could downsample. They could bin. And that's ok! It's in the video as a solution! But their expectation (based on what I think is marketing) is "the cameras should be faster" by default (i.e., without binning/downsampling), and they're worried something is wrong (when it really isn't).