To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/cuivlazygeek. You'll also get 20% off an annual premium subscription.
My Patreon: www.patreon.com/cuivlazygeek
My Merch Store: cuiv.myspreadshop.com/
Link to the 6um pixels camera: tinyurl.com/yjwd836e
Link to the 2.9um pixels camera: tinyurl.com/4y9dz774
RedCat51: bit.ly/48hyuVx (Agena) or bit.ly/48pTWXW (HPS)
Amazon affiliate: amzn.to/49XTx01
Agena affiliate: bit.ly/3Om0hNG
High Point Scientific affiliate: bit.ly/3lReu8R
First Light Optics affiliate: tinyurl.com/yxd2jkr2
All-Star Telescope affiliate: bit.ly/3SCgVbV
Astroshop.eu affiliate: tinyurl.com/2vafkax8
Lukomatico's video: ruclips.net/video/V2L6bmr8nuA/видео.html
Oh Cuiv, you're awesome! I love your comment toward the end of the video, which is to start by thinking about the FOV and resolution you want to achieve - i.e. what kind of targets you are planning to shoot - and then work back from there to determine your ideal configuration. Even the practical aspects of how much time you can consistently get on target (due to weather in your area, or the fact that you have to pack in/pack out each time) have an important impact. These factors are so much more important to *start with* than "is this rig better than that," etc. For me in L.A., shooting small-to-medium nebulae and larger galaxies over limited periods of time, a 6" Newt with a smallish, low-noise OSC sensor works great.
You definitely have the best astrophotography channel. The fact that you give so much time to explaining (and doing) all that stuff is proportional to your dedication and knowledge of astrophotography raised to the nth power. Definitely my favorite channel for this kind of content.
Forgetting the math, here's what I understood:
1 - Matching your goals to your equipment and seeing conditions is fundamental to getting the best results from a given set of equipment;
2 - More light and more resolving power come from bigger apertures; a bigger objective gathers more light;
3 - Focal length determines field of view;
4 - Smaller sensors give higher magnification and a smaller field of view. A small sensor suits small bright objects, and with bright objects small pixels are not a problem;
5 - Bigger sensors give lower magnification and a bigger field of view. A big sensor suits large faint objects, and with faint objects bigger pixels are important;
6 - Faint objects need long exposures, and long exposures call for good guiding;
7 - A good workflow and post-processing are fundamental to getting better results.
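Points 3 to 5 of the summary above can be sketched with the standard pixel-scale formula; the sensor and focal length used below are purely illustrative numbers, not a recommendation:

```python
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

def fov_arcmin(pixel_um: float, focal_mm: float, n_pixels: int) -> float:
    """Field of view along one sensor axis, in arcminutes."""
    return pixel_scale(pixel_um, focal_mm) * n_pixels / 60.0

# Illustrative: 3.76 um pixels, 6248 px across, on a 250 mm focal length scope
scale = pixel_scale(3.76, 250.0)        # ~3.1 "/px
width = fov_arcmin(3.76, 250.0, 6248)   # ~323' (about 5.4 degrees)
```

Shorter focal lengths widen the field of view and smaller pixels tighten the sampling, exactly as the summary says.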
Good summary overall! Although remember that if a large and a small sensor have the same pixel size, you could always crop the images of the large sensor to accomplish exactly the same results as the small sensor :)
@@JuanGutiérrez_cl I think something like: taking flats, darks, and bias frames; imaging objects high in the sky; archiving everything in an organized way; using a standard naming scheme for files and folders; making sure everything is collimated, clean, in focus, and aligned; and testing your mount for guiding errors. Then, after the imaging session, choosing the best frames for processing, and so on. In other words, it's like making a task list to be sure you always do everything that has to be done to get the best from your equipment every time. I'm still learning and don't have equipment to take images yet, so this all comes from reading and watching videos; take that into account.
Great video! I had a couple of comments:
1) With respect to QE in various formulas (etendue, etc.), don't just copy the listed QE value from your camera manufacturer's site directly. This value is usually actually "peak QE," i.e. the maximum quantum efficiency over the whole spectrum of light. You probably don't care about that specific wavelength, so it's probably better to look at the QE graph and determine the relevant QE for a wavelength you're actually imaging, like H-alpha.
2) On software binning - one thing not frequently mentioned is that smaller pixels generally tend to have a worse fill factor (that is, the percentage of each pixel's area that is actually photosensitive). Modern sensors try to mitigate this with microlenses, but this sometimes results in suboptimal QE across the whole spectrum compared to a larger pixel. If you're imaging at an unusual wavelength or have some specific scientific need, larger pixels are probably a good idea. I suspect this is part of why lots of professional telescopes use giant 9 and 12 micron pixels compared to the 3-5 micron range we use for our gear.
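The first point can be made concrete with a small interpolation sketch; the QE curve values below are made up for illustration and not taken from any real sensor:

```python
# Hypothetical QE curve sampled from a manufacturer's graph: wavelength (nm) -> QE
qe_curve = {450: 0.80, 550: 0.91, 656: 0.75, 750: 0.55}

def qe_at(wavelength_nm, curve):
    """Linearly interpolate QE between the two nearest sampled wavelengths."""
    pts = sorted(curve.items())
    for (w0, q0), (w1, q1) in zip(pts, pts[1:]):
        if w0 <= wavelength_nm <= w1:
            t = (wavelength_nm - w0) / (w1 - w0)
            return q0 + t * (q1 - q0)
    raise ValueError("wavelength outside sampled range")

# Peak QE here is 0.91, but at H-alpha (~656 nm) this made-up sensor is at 0.75:
ha_qe = qe_at(656, qe_curve)
```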
Both good points! I didn't want to go into the QE curves since I show those in other videos, and I do allude to the losses at the "borders" of pixels in the subtitles, which indeed are worse with small pixels!
I made the mistake of chasing speed. I found that quality optics, a quality mount, and a camera properly matched to the optics are what matter most. So now I shoot mostly with APO refractors. Yes, it takes longer to get the desired SNR for a good image, but the results are far superior to what I got from an SCT or Newt with similar SNR after processing: better stars, better contrast, and in some cases better detail despite the smaller apertures being used.
One other thing I would add is that fast systems tend to be the hardest to use. Extremely fast lenses are bound to have optical aberrations, sometimes beyond the ability to fix. So while you are capturing a ton of signal, your image might not be so good. People should consider overall optical quality when buying a scope.
As an astrophotographer moving from beginner to amateur and looking to upgrade my beginner rig, I think this will help me out as soon as I can fully understand it, lol. I did not know about all the different little factors that affect the signal coming into the camera. I use a little Canon EOS M200 and just upgraded my mount from a SAM to an AM5, so telescope and cameras are next. This video will definitely help me understand those purchases a bit better when I can make them. So I appreciate the effort that you put into explaining this!
@@CuivTheLazyGeek I'm in a dark area, and I can't figure all this stuff out. Why can't I just get an app where I can enter my equipment? Of course, if I have already bought it all, I suppose the trick then is to make sure I'm using the perfect settings. But even if I'm using the perfect settings, how do I know if I'm doing the perfect stacking? Maybe I should throw away half my subs? Maybe I should only use 5% of my subs?

And what about calibration frames? I read that I should take my flat frames at the same focus point as my lights. But I also read that as the temperature changes during the night, I should be changing my focus. OK, so now I take 30 Ha frames and end up with 5 different focus positions during the night. Morning comes. I have a Moonlite focuser, so I know the exact focus position of each handful of subs. So now I have a data set from one night with 3 narrowband filters, each with multiple focus positions and all with different flats. All use the same darks and biases. And I do this for a week.

I did exactly this last week. We had super clear nights with great seeing and perfect transparency, the first time that's happened for me in three years, and I got 34 hours of total exposure time. How on earth do I combine all the calibration frames and stack my images properly??? Is there a way to get NINA's flats wizard to read the focus positions from the file names and shoot the flats at the right focus positions? The Moonlite focuser has zero backlash, so I'm certain I get repeatable focus positions.

There are so many variables. I really wish AI could figure out, using my equipment specs, location, weather, and atmospherics, what gain, offset, and exposure lengths to set NINA to. There has to be a way to figure all this out if it's so easy to figure out mathematically. Cuiv, work your magic in NINA. You're our only hope.
@@christopherleveck6835 First things first! Flats don't care about small (and even somewhat large) focus changes (I have a video explaining why), so don't worry about that!
I was really looking for a physical interpretation of those parameters in order to understand what is good and what is bad. I had to pause many times to think about what you were saying. I absolutely need to bookmark this video because I will have to come back to it later. Many thanks for your down-to-earth tutorial.
This video is awesome! I didn't see this when it was published (because of my arm; and I just got done making a similar video!). Just one comment: If you double the SNR of a scope, then it will take 1/4 the amount of time to get the same SNR because SNR follows a square root function. So it really is 4x faster.
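A quick sketch of that square-root relationship:

```python
def time_ratio_for_snr(snr_gain: float) -> float:
    """If scope B reaches snr_gain x the SNR of scope A in the same time,
    B needs only 1/snr_gain**2 of A's integration time to match A's final SNR,
    because SNR grows as the square root of total exposure time."""
    return 1.0 / snr_gain ** 2

# Doubling the SNR means only a quarter of the integration time is needed:
time_ratio_for_snr(2.0)  # 0.25, i.e. the system really is 4x faster
```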
Aperture always wins; the focal ratio is just the outcome of how much angular field you want to cram onto a given sized sensor. What everyone misses is how critical focus, collimation, and tracking accuracy are to detecting faint objects. If you can lower your FWHM from, say, 3.5 arcseconds to 2.5, that's about one magnitude fainter in limiting detection.

The other thing I think is worth pointing out is that spot size (the physical size of the Airy disk on the sensor) depends ONLY on focal ratio; it's about 5 microns at f/4, so at f/2 you can't even sample the diffracted spot with most cameras, and in fact the RASA design isn't even diffraction limited (per the Celestron published spot diagrams).

People get way too hung up on this, and I don't buy this concept of pixel etendue; on its own it doesn't mean anything. The whole point of etendue was to talk about productivity, meaning you want to survey X square degrees of sky down to Y limiting magnitude in the minimum amount of time. The RASA 11 will do about six square degrees to magnitude 20.5 in about six minutes; that's its productivity. An f/10 telescope will cover 0.1 square degrees to the same limiting magnitude in the same time. So when you are paying for a "fast" optic, you should be paying for the maximum size sensor that goes with it, because you want the highest productivity (square degrees of sky imaged per hour of imaging), not because you want to image some small target in less time than the slower system.
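The "5 microns at f/4" figure follows from the standard Airy disk diameter formula, 2.44 * lambda * f#; a short sketch confirms it (550 nm assumed as a representative visible wavelength):

```python
def airy_disk_um(f_ratio: float, wavelength_nm: float = 550.0) -> float:
    """Diameter of the Airy disk at the focal plane: 2.44 * lambda * f#."""
    return 2.44 * (wavelength_nm / 1000.0) * f_ratio

airy_disk_um(4.0)  # ~5.4 um, the "about 5 microns at f/4" quoted above
airy_disk_um(2.0)  # ~2.7 um: smaller than most camera pixels, hence undersampled
```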
Not nearly as simple as you make it seem. Yes, you want the maximum sensor size that fits the image circle (allowing for edge roll-off) to maximize productivity, but there are a ton of parameters to consider if you want to survey objects down to magnitude 20.5 😅 That would have made this video hours long. 😅 Thanks Cuiv for cutting yourself off where you did!! 😅
Hi Cuiv. I think for a lot of people, a simple list with field of view/resolution and recommended focal ratio/pixel size would be helpful, like a checklist: if I want to photograph Andromeda with the best detail, what telescope/camera is recommended? Thanks for the deep dive. The math was not too complicated 😅
Great talk Cuiv on an interesting subject. There has been a lot of talk on the forums about all of this. Another related issue is the difference between point sources (stars) and extended sources in how the final image is processed. Another good, recent, source is the talk on AIC "The Quest For Aperture: Why are big Telescopes better"
If I've summarized this video correctly in my head:
1) First, have a solid idea of the angular extent of the object(s) you wish to have in your field of view.
2) Then (if money is no object) pick the camera (sensor), focal length, and aperture that are most efficient for acquiring the subject at the best possible resolution given the local seeing conditions.
This is why having several different scopes (focal length & aperture combos), along with a number of different cameras, enables a wide variety of subjects and the best possible images. Example: for wide-field nebulae (NGC 7000 with the Pelican Nebula, or the entire Veil nebula complex), a short-tube refractor with a camera sporting the IMX571 sensor (e.g. ZWO ASI2600 MC/MM Pro) or better will maximize detail while providing an image across the full field of view. Meanwhile, for imaging distant galaxies (i.e. not M31 or M33), an 8" or greater SCT, Newt, Mak-Newt, etc. with an IMX533 sensor (e.g. ZWO ASI533 MM/MC Pro) will give a more restricted field of view around a subject such as M51 or M101 and allow resolution of some of the nebulosity in the spiral arms (i.e. small details). Of course there is always mosaicing, but I digress. Did I capture the essence and the tensions correctly?
This is pretty good as a summary, although in the last step, you could use the IMX571 instead of the 533 and just crop! The two sensors are almost identical besides their size!
@@CuivTheLazyGeek I wrote IMX533, but I think I really meant the IMX585. You're right above: I believe the pixel scale is the same with the IMX533 and IMX571 sensors; one sensor is just larger than the other.
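A small sketch of the crop equivalence being discussed in this thread; the pixel counts below are the commonly quoted resolutions for these sensors, so treat them as approximate:

```python
# Both sensors have 3.76 um pixels, so at the same focal length the pixel
# scale is identical; the larger IMX571 frame simply contains the IMX533 frame.
IMX571 = (6248, 4176)   # px, APS-C format (approximate)
IMX533 = (3008, 3008)   # px, 1-inch square format (approximate)

def crop_equivalent(large, small):
    """A centered crop of the large sensor reproducing the small one."""
    return tuple(min(l, s) for l, s in zip(large, small))

crop_equivalent(IMX571, IMX533)  # (3008, 3008): same FOV and scale as the IMX533
```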
Thanks Nick! Sorry I'm unable to express it more clearly - I'm planning on making a video of "how do telescopes work" with actual lens simulation software so hopefully that will help!
As you said, there are a whole lot of other things that significantly impact the imaging and the final images, besides the differences in focal ratio. People tend to think: "I bought this fast telescope, so now I can capture images faster and I don't need so many clear skies anymore." That's only partially true. A whole other bag of issues has just opened up, and there is plenty more to worry about, including but not limited to sensor capability (the camera could be overexposing on bright objects) and unforeseen or previously unseen optical aberrations, since small issues are now magnified into larger ones. The list goes on: light pollution vs. fast optics, filter compatibility vs. fast optics, and so on. Nothing is simple in this hobby, nothing is straightforward, and it's one of those hobbies where A+B does not equal AB but something more complicated.
I'm interested in trying film astrophotography (I despise the processing side of the hobby, unfortunately...) which is essentially optimizing a single frame for SNR and exposure. It's an interesting puzzle that today's guiding tech could make much, much easier than it was 20 years ago. That said, with all of the starlink crap flying around now, I'll never be able to get a clear shot without satellite trails.
Great vid again Cuiv, merci! Sadly, it all starts with the budget. In an ideal world, meaning I had $10k to spend, I would:
1 - First select the resolution -> 1"/pixel to 2"/pixel
2 - Then select the biggest sensor that keeps me away from a lot of tilt issues -> APS-C (23.5 x 15.7mm, or 23,500um x 15,700um)
3 - Then select the field of view I want -> min 2° x 1.4° (the APS-C aspect ratio) = 7200" x 5000"
4 - Compute the max focal length I need: Fmax = 206 * 23,500um / 7200" = max 670mm
5 - Compute the pixel size I need: P = resolution (1 to 2") * F / 206 -> I need P between 3.2 and 6.4um
6 - Choose the camera. The IMX571 sensor is APS-C with 3.76um pixels! -> $2k!!! :(
7 - Compute the min focal length I need: Fmin = P * 206 / resolution_max = 3.76um * 206 / 2" = min 390mm
8 - Choose a type of telescope where I can add data from one year to the next -> avoid spikes on bright stars, no spider holding the secondary, no Newt
9 - Choose a type of telescope where imperfect collimation doesn't affect image quality much -> focal ratio min 4
10 - That leaves me with refractors; select the biggest one that weighs less than 15kg -> 120mm -> $3k!!! :(
11 - 120mm f/7 refractors have a focal length of 840mm, so add a good 0.7x focal reducer that fully illuminates an APS-C -> $500
12 - Choose a mount that can guide 20kg reliably -> $2-3k
But if I have less than $5k to spend, that whole beautiful plan disappears :(
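Steps 4, 5, and 7 of the plan above can be checked with a few lines (same numbers as the comment):

```python
# Step 4: longest focal length that still spans a 2-degree (7200") field
# across a 23,500 um wide APS-C sensor, using FOV(") = 206 * size(um) / F(mm)
f_max = 206 * 23_500 / 7_200     # ~672 mm

# Step 5: pixel sizes giving 1"/px to 2"/px at that focal length,
# from P(um) = resolution("/px) * F(mm) / 206
p_lo = 1.0 * f_max / 206         # ~3.3 um
p_hi = 2.0 * f_max / 206         # ~6.5 um

# Step 7: shortest focal length keeping the IMX571's 3.76 um pixels at <= 2"/px
f_min = 3.76 * 206 / 2.0         # ~387 mm
```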
Great exposition!! I saw Luke's video and wondered if binning made any sense, considering that the data collected by the sensor doesn't care whether I have one huge "fake" pixel or it's split into smaller pixels. Now that you mention software binning, you've answered my doubt. I really enjoy these nerdy videos. It's not the first time I've said it: you make perfect sense of everything in a way that's not usually seen in the "astro-geek" RUclips community. To make things even better, I learned something: when comparing systems, I always compared signal gathering instead of SNR. Great food for thought, thanks!!
Fantastic video as always Cuiv! I love your more technical approach to explaining, it's always very interesting and you explain things really well. Thanks for your hard work!
Great video! A lot of stuff I'm having to learn about. So I'll be watching this video for several times to understand everything better.But this is very useful to me!👌🏻✨👍🏻 Thanks!!
Really good video, Cuiv. I remember I had some debate about this with my colleagues, and the conclusion I reached is the following, under some assumptions:
1. We use the same camera sensor for comparing the two telescopes (so the same pixel size, quantum efficiency, read noise, thermal noise, and so on).
2. Both telescopes have the same focal length, so the resulting pixel scale in arcsec/pixel is the same, and thus the field of view is the same.

Now, with this, we compare two telescopes with different aperture sizes but the same focal length, say 100mm and 200mm. My thinking is that the total efficiency of one's setup, meaning how efficiently the setup can "catch" light under given sky conditions, depends only on the aperture size. More diameter means more light entering the telescope tube, which, in our example, "falls" on the same sensor. As the quantity of light collected is strictly correlated with the area of the aperture, and the area goes as the square of the radius, the 200mm aperture telescope gathers 4 times more light than the 100mm one. This calculation is strictly on the optical/light side, without taking into consideration the SNR factors you just mentioned (which are, indeed, crucial to the overall result).

So in conclusion, my thought is on par with yours: work out your field of view, meaning choose a certain focal length paired with the necessary camera sensor, and then choose the maximum aperture you can afford, of course with the best field correction for your sensor. Another benefit of choosing the bigger aperture is that you get better resolution (finer details), but I guess that also depends on the resolution your sky conditions allow for... and sky conditions change throughout the night... so... But as you said, the discussion can take forever :D All the best with your YT channel.
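The aperture-area argument above in one line (a sketch that ignores central obstruction and transmission losses):

```python
def light_ratio(d1_mm: float, d2_mm: float) -> float:
    """Ratio of light gathered by two apertures: area scales as diameter squared."""
    return (d2_mm / d1_mm) ** 2

# At equal focal length (same FOV, same pixel scale), a 200 mm aperture
# collects 4x the light of a 100 mm aperture:
light_ratio(100, 200)  # 4.0
```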
The way I view an instrument in astronomy is as converting a parallel beam from a given direction into a position on a focal plane. Therefore I have this equation in mind: f * dtheta = dx, where f is the focal length, dtheta is the angular size of the object you want to observe, and dx is the size of its image. From this equation I know the total FOV of the instrument and also the resolution in arcsec of a pixel. In this case I only need the focal length. Then, I agree, the question is: do we have enough photons in the pixel? That's where the diameter D comes into the game. Having fixed f, it is good to have a small f/D, i.e. a large angle of the converging beam (alpha = D/f). I am not a specialist, but at small f/D the beam angle is large, so possibly more distortion of the image?
I've tested two scopes that were close, one f/4.8 and another f/5.2, both just over 500mm, and the superb glass clarity and construction made for more SNR, more visible detail, and better FWHM out of the "slower" scope.
I guess there are also issues in choosing narrowband filters for "fast" systems. The filters' bandpasses will shift, requiring different filters. For fun I looked at JWST: it's f/20 with about a 131m focal length, and they specify about 0.1 arcsec per pixel. Insane numbers!
I agree with some previous comments that the complexity of handling a telescope does have an impact, especially considering the initial video. As an example, if you don't get the collimation of a reflector right, you might get worse images than with a smaller (and "worse" f-ratio) telescope (a refractor, or a perfectly collimated reflector). Just saying... happened to me 😉.
Great video as usual!!! 👍👍 I started with a 150 f/4 (the GSO) and a 294MC as my first instruments, as I thought that with a narrow balcony view, speeding up light collection with a low f-ratio would be good. After 4 years and a change to a Quattro 150P and a Touptek 571C for smaller pixels with an OAG, I'm feeling all the downsides of the high-precision collimation required, compared with the relative ease of use of a similar APO, even if the APO is more expensive. This video helped me better understand how things actually work, and I'm still pretty satisfied with my rig. But how did you manage collimation with the new focuser? I bought it too, but I wasn't able to install the ClickLock, as the focuser can't travel far enough in to compensate for the extra thickness of the ClickLock, and getting a good collimation with the stock blue compression ring is a nightmare!
At a print size of 300ppi, your eyes already do the binning. So even if you don't bin in camera, it will look pretty much the same as if you did bin and then printed the same-size image at 150ppi. This tells me that if I have a lot of pixels (and I'm not pixel-peeping on my computer screen), the pixel size ratio is not so important; I'd rather keep the resolution. BTW, I learned this in terrestrial photography from a Tony Northrup RUclips video many years ago.
To be honest Cuiv, I don't really know enough about it to comment on whether you are right or wrong. Personally I just use the Astronomy Tools field of view calculator, enter my various scope/camera combinations for a given target, and see what gives the best-looking result.
Cuiv, your videos are great, thank you! I watch many videos from both you and Nico Carver, and any time the topic of focal ratio comes up, I zone out in disappointment.

For me, when I plan an evening of pictures, I select an object to capture, with an idea of how I want that framed. So there is an area of sky that I want to capture. Imagine a focal length, aperture, and camera that perfectly covers this frame. Hold the camera constant (it is not part of focal ratio), and hold exposure time constant too.

If I decrease the focal length ("faster" system), the image is brighter (that's why it's called faster), but my target frame is smaller. The same number of photons are collected from this target frame, but they fall on fewer pixels. If instead I increase the aperture (also "faster"), the image is also brighter. But the field of view is the same, so I am saving all the pixels. More photons are in my target frame.

When I process and crop my resulting image into the target framing, the decreased-focal-length test has no additional photons, whereas the increased aperture did have more photons. More photons allow more detail. So assuming I make a picture showing only my intended framing, focal length is not the prime factor; aperture is.

If I own many scopes, I want to image with the one that has the largest aperture while giving me my full target frame. Any two scopes with exactly my target frame will give me very comparable pictures, regardless of the focal ratio. The faster scope will collect the data in less time, but that is because it will have more aperture. Of course, your analysis goes deeper into noise issues. But I insist that catching the photons I want in my image is most important, and that is aperture!
Wonderful video! I wonder if you could make a video comparing two telescopes with the same aperture but different focal lengths, with the same exposure time, looking at the details. The lower focal ratio telescope should have lower noise, but the object occupies a smaller portion of the frame; for the higher focal ratio one it's the opposite.
Kudos to Cuiv for providing an accurate explanation of what Lukomatico found empirically regarding the limitations of assuming that the light gathering power (aka speed) of an optical system is determined solely by (1/f)^2. This equation is certainly correct, but we need to be careful when we use it: for our astrophotography purposes it is missing a key component of our imaging train - our camera!

The light gathered by an optical system per unit area of the sensor is proportional to (D/FL)^2 = (1/f)^2 (where D is the lens or mirror diameter), but to determine the signal landing on a pixel (which determines the quality of an image) we need to multiply by the area of a pixel (proportional to p^2). Just like with the etendue equation Cuiv mentions in his video, this means that the relative signal per pixel varies as (D x p/FL)^2 = (p/f)^2 (I'm ignoring central obstructions, optical transmittance, and sensor quantum efficiency here).

We can go one step further to obtain another useful expression. We often speak in terms of the pixel scale PS, which is proportional to p/FL. Solving for p, we find that p is proportional to PS x FL. If we substitute this into the etendue equation, we find that the signal per pixel is proportional to (D x PS)^2, where D is the diameter of our optical system (lens or mirror).

In Lukomatico's video he compared two different optical systems using different cameras, chosen so that the two pixel scales were roughly the same. In that case, the signal received per pixel (and hence image quality) is just proportional to D^2, essentially what he found empirically - the telescope with the bigger objective took less time to achieve the same image quality.
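The proportionality described above can be sketched numerically; the two rigs below are hypothetical, chosen so their pixel scales match:

```python
def signal_per_pixel(d_mm, fl_mm, pixel_um):
    """Relative signal per pixel, proportional to (D * p / FL)^2.
    Ignores obstruction, transmittance, and QE, as in the comment above."""
    return (d_mm * pixel_um / fl_mm) ** 2

# Two hypothetical rigs with the same pixel scale p/FL (~1.55 "/px):
rig_a = signal_per_pixel(100, 500, 3.76)    # 100 mm aperture
rig_b = signal_per_pixel(200, 1000, 7.52)   # 200 mm aperture, pixels scaled to match

# At matched pixel scale the ratio reduces to (D2/D1)^2 = 4:
ratio = rig_b / rig_a
```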
Lukomatico didn't prove anything empirically at all. Hand-waving over two images from supposedly similar optical systems proves nothing. Integrate the flux for the two systems and compensate for the differences in gain/filter bandpass, then come back with the empirical data sets, please. Finally, a bigger aperture telescope will only gather light faster if its f-ratio is lower than that of an equivalent telescope: a 200mm f/4 beats a 200mm f/6 every day of the week with the same cameras.
@@neilhankey2514 We are in agreement - the key phrase you used is "with the same cameras." His "experiment" was done using different cameras and different scopes, which made his video confusing to an extent. He did it that way to keep the pixel scale (nearly) constant (a great approach for folks with lots of scopes and lots of cameras, and it produces images with similar resolution). If you use the same camera when comparing different scopes, then the (1/f)^2 expression holds, just as you wrote in your last sentence, since the pixel size is the same. Note that I didn't write that Lukomatico proved anything, just that he reached a conclusion qualitatively (and maybe semi-quantitatively) relevant to his scenario.
Another observation I have regarding speed is that the f-ratio tells us nothing about the actual T-stop, i.e. the actual light transmission. For example, take an SCT at f/7 compared to an f/7 refractor. Both have the same "speed," but the SCT has a significant central obstruction that makes its effective f-stop slower. Or at least that's my understanding. lol
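One way to put a rough number on this, counting only the geometric area lost to the obstruction; the 8" f/7 scope and 35% obstruction below are hypothetical, and a real SCT also loses light to mirror and corrector transmission:

```python
import math

def effective_f_ratio(focal_mm, aperture_mm, obstruction_mm):
    """F-ratio based on the unobstructed collecting area: FL / sqrt(D^2 - d^2)."""
    clear_diameter = math.sqrt(aperture_mm ** 2 - obstruction_mm ** 2)
    return focal_mm / clear_diameter

# Hypothetical 8" (203 mm) f/7 SCT with a ~35% central obstruction by diameter:
effective_f_ratio(1422, 203, 0.35 * 203)  # ~7.5, a bit "slower" than the label
```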
It's an excellent review, because I was very confused about choosing the right rig for astrophotography among 5 rigs; the only thing missing is a larger-than-one-inch sensor camera. Thank you very much!
Fantastic video Cuiv! I knew something else was involved other than just the focal ratio, as I have seen many images with different f-ratios vs. exposure times where squaring the f-ratios did not predict the quality difference. I like that etendue formula from John Hayes; he is quite technical - over my head. I think once you have the pixel etendue of your system, you can use that, in addition to the resolution formula ((pixel size / telescope focal length) x 206.265), to work out what camera would be best for your optical system. Once you have the optimal etendue and resolution determined, if you want a larger FOV, just get a larger-sensor camera that has the same pixel size. For example, the ASI1600, 2600, and 6200 all have the same size pixels, and thus the same etendue and resolution; the 6200 will just give you a much larger FOV - I think... Cheers, Kurt
Superb discussion Cuiv! As always, your analysis & presentation tickled the synapses in my brain! Thank you!👍 BTW, it should be noted that this video & the subsequent discussion focus far more on the "Geek" part of your followers, and less on the "Lazy" part! But that's OK by me.😁 Not surprisingly, I do have questions & comments... apologies for the length of the post, but you did get those synapses firing on all cylinders! 😬🤔
1. When you talk about "shot noise", are you referring to (a) 'thermal noise' during a single exposure, (b) 'background noise' due to natural/artificial light pollution as well as scattered light in the optical system, (c) another source of noise, or (d) several or all of the above?
2. Correct me if I am wrong, but if 'thermal noise' & 'read noise' remain constant (a given sensor), then won't a 'faster' system (larger aperture for a given focal length) allow shorter exposures to get above the 'background' noise, thereby enabling the imager to capture more sub-exposures during an imaging session (and more subs when the object is high in the sky, when there's no moon, and transparency & seeing are good)? This would increase the number of images that can be stacked in the most optimal conditions, and more stacked images mean less noise in the final stacked image (noise reduced by a factor of 'the square root of N sub-exposures', IFF the desired signal is above all sources of noise)?
3. IMHO, the 'quality' of an image depends on spatial resolution, dynamic resolution (bit depth), S/N, and the quality of processing. If we're talking about the aesthetic nature of the image, then it would be fair to say that the more pixels in each channel that sample the object, the better, along with the highest dynamic resolution, the highest colour fidelity, AND - yes - S/N. So, having a faster system is better. However, there is another boundary condition to consider, and that is the resolution of the medium on which the image is shared (e.g.
print of a photo, resolution of the projector/computer screen, and finally the limits associated with vision: spatial, dynamic, and colour resolution of the eye, and viewing distance).
4. Is there such a thing as too much resolution? As you have pointed out in past vids, yes! Sampling the sky at a very high resolution often places unrealistic expectations on guiding performance, the effects of atmospheric turbulence ("seeing"), sources of vibration in the system (wind, stomping feet, nearby road/rail traffic, etc.), flexure in the system, and so on.
5. IMHO & FWIW... the rule that a sensor should sample the sky at a rate between 1" per pixel and 2" per pixel is FAR too restrictive & often too demanding. I have spent hours upon hours drooling over images taken by CCD & CMOS sensors over the decades, and I can attest that there are millions of mind-blowing, aesthetically impactful astro images taken with systems that sample the sky at 3" to 5" per pixel. Furthermore, I will suggest that colour fidelity, pleasing levels of contrast, and framing are of equal importance to spatial resolution. Of course, it's better to process a higher-resolution image and then reduce its fidelity to deal with the confines of the presentation medium & viewer considerations, but that also presents an exercise in diminishing benefits. So, IMHO, as recreational astro-imagers who enjoy the process & products of 'taking pretty pictures' (rather than astrometry, variable star research, etc.), we really need to communicate the sources of enjoyment of the process & final product a lot more, and 'relax' the definition of optimal spatial resolution, especially for newcomers who want to take deep sky images of extended objects with a relatively short focal length system that won't empty the bank account or create a lot of debt.
6. Here's some of the bad news.
All imaging involves filtering, including the microlenses over each pixel and, in OSCs, the Bayer array... and lower focal ratios degrade the performance of filters, which will affect your S/N and colour balance across the frame (you already discussed some of the reasons on your channel). Overall, systems with a central obstruction & 'fast' focal ratio systems increase the likelihood that each stage of 'beneficial' filtering will - to some degree - negatively impact the quality of each sub-exposure. BTW, a great discussion of this issue is on the Altair Astro website: www.altairastro.help/info-instructions/cmos/affects-of-telescope-focal-ratio-on-light-pollution-filter-performance/. OK... I'm going to give my tired, ADHD-stressed old brain a break here, and I will leave it to you & others with much younger brains to deal with it all. lol I hope all continues to be well for you, your wife & respective families!😀
1. Shot noise is the noise inherent in the signal - even with an ideal sensor that doesn't have read noise or thermal noise. Shot noise is equal to the square root of the signal - more details in my "noise in astrophotography" series! 2. A faster F-ratio will allow you to swamp read noise more quickly. However, it's not the number of subframes that matters but the total exposure time (10x10 minute subframes give the same SNR as 100x1 minute subframes, assuming swamped read noise). 3. I'd say resolution of the final image can be controlled in processing (as long as it isn't higher than what the telescope can capture). 4. Yes, but it's surprising how far we can go (see the recent lukomatico video on binning). 5. True - I have an upcoming review of a FF sensor used with 5"/pixel. 6. The Bayer filters don't have an issue with fast FR, but narrowband filters will be affected - I have several videos on the topic already :) Cheers!
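The "total exposure time, not sub count" point can be checked with a quick sketch. The rates below (1 e/s target, 20 e/s sky, 2 e read noise) are illustrative numbers, not from the video:

```python
import math

def stack_snr(signal_rate, sky_rate, read_noise, sub_seconds, n_subs):
    """SNR of a stack: shot noise from target + sky accumulates with total
    time, while read noise is added once per sub-exposure (electron units)."""
    signal = signal_rate * sub_seconds * n_subs
    noise_var = (signal_rate + sky_rate) * sub_seconds * n_subs \
                + read_noise ** 2 * n_subs
    return signal / math.sqrt(noise_var)

# 10 x 10-minute subs vs 100 x 1-minute subs: same 100 minutes total
long_subs  = stack_snr(1.0, 20.0, 2.0, 600, 10)
short_subs = stack_snr(1.0, 20.0, 2.0, 60, 100)
# Once the sky swamps read noise, the two stacks are nearly identical
print(long_subs, short_subs)
```

With these numbers the two SNRs differ by well under a percent, because the extra 90 read-noise contributions are negligible against the swamping sky signal.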
So it's good to hear one of you confirming physics and optical theory is actually real, as is the focal ratio. Thanks also for the link to the cloudy nights forum and discussion on this topic. The binning subject needs another video. I invest in fast optics because I'm "time poor living in NL," and like most amateurs, I cannot afford to waste a clear night with experimentation. Binning is a great way to maximise your time under a clear sky. Lum = 1x1 and colour = 2x2, since the resolution of the image comes from the luminance channel (much like JPEG's chroma subsampling). This means you can use 1/4 of the time on the RGB data, focusing more on the L channel, which might be a Ha filter depending on the target. This is a huge win for mono imagers.
Forgive me if my maths is shaky, but consider a 780mm focal length telescope with either a flattener or its 0.8x flattener/reducer, using the same camera in both scenarios: S1/S2 = (6.5/5.2)^2 × (1) × (1) × (1) × (1)^2 = (1.25)^2 = 1.5625, with all other factors equal. The 0.8x reducer delivers 1.5625 times more signal per pixel. However, and here's my question: as the telescope becomes 624mm when fitted with the 0.8x focal reducer, the amount of light each pixel records increases by the same amount as the section of sky that pixel is looking at. Surely, therefore, that also means the individual pixel is only recording more light because it's being exposed to more sky. So doesn't that also mean that the individual galaxies and portions of nebulosity we image with the two setups (assuming the targets are smaller than the whole FOV) are no brighter? Or to put it another way: at 780mm the galaxy covers 1000px with an average value of 1.0/px, while at 624mm the same galaxy covers 640px with an average value of 1.5625/px. But the overall signal from the galaxy in both cases is still 1000.
Yep, that's basically it. Whether you decrease focal length or increase the pixel size, the end result is that each pixel sees a larger solid angle of the sky, and at fixed aperture that means more photons per pixel!
Thanks for confirming that. I hear a lot of chatter suggesting that the 0.8x focal reducer is somehow going to increase the speed by so many 'stops'. Which of course would be true for a camera lens, where the FOV remains the same as the aperture changes. I think this is a common source of confusion for many photographers coming into astrophotography.
@@AstroCloudGenerators A focal reducer only reduces the apparent focal length. The resulting image is "brighter" because it's squeezed the same number of photons into what's now, as you note, a smaller object image. Looked at differently, a FR increases the apparent FOV for a given size sensor. The piper must be paid, however, because a FR doesn't change the laws of physics. Actual FOV and focal length didn't change, and the total number of photons stays the same. So how does all that manifest itself? Think about what happens at the original edge of the FOV. It's squeezed towards the center. The result is a smaller image circle! So, if your sensor size fits well within the reduced image circle, you're fine. If it doesn't, you'll see vignetting. Might as well crop the sensor down. Question then becomes, are you better off with a FR or a FF? To which I say the answer is obviously * 😅
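The reducer arithmetic in this thread can be sketched numerically (using the 780 mm / 0.8x numbers above; the toy galaxy's pixel count and brightness are illustrative):

```python
# Sketch: a 0.8x reducer on a 780 mm scope. Per-pixel signal rises by
# 1/0.8^2, but the object's total signal is fixed by aperture alone, so
# the same photons just land on fewer pixels.
native_fl, reducer = 780.0, 0.8
reduced_fl = native_fl * reducer                 # 624 mm
per_pixel_gain = (native_fl / reduced_fl) ** 2   # 1.5625

# Toy galaxy: 1000 px at 1.0 e/px at native focal length
native_px, native_signal_per_px = 1000, 1.0
reduced_px = native_px / per_pixel_gain          # 640 px (area scales by 0.8^2)
reduced_signal_per_px = native_signal_per_px * per_pixel_gain

total_native  = native_px * native_signal_per_px
total_reduced = reduced_px * reduced_signal_per_px
print(per_pixel_gain, reduced_px, total_reduced)  # 1.5625 640.0 1000.0
```

Note the galaxy's footprint scales with area (0.8² = 0.64), so 1000 px becomes 640 px, and the total signal is unchanged at 1000 e.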
I have not been a fan of describing telescopes as fast or slow based on f-ratio. I do understand that luminance at the sensor is completely defined by the f-ratio. Lower f-ratio, more luminance, in a linear relationship. Aperture doesn't even factor in. For two telescopes with the same aperture, the one with the shorter focal length will make an image brighter, with more luminance. Are you getting more light with the shorter focal length? No, just a smaller image with the light more concentrated. The longer focal length telescope will be just as "fast" if the pixel size is made bigger vs. that used on the shorter focal length telescope, in a linear relationship. I agree it is better to take a total system view with one's objectives in mind and pick out the best compromise. I further think it is best to stick to the fundamental measurements of aperture, focal length, and pixel size when thinking about imaging systems. The f-ratio is not a fundamental measurement, as it is focal length divided by aperture.
I own a 65mm quintuplet refractor (fell for the hype with refractors), but since switched to an Orion 8-inch astrograph. Got it used for $150, and the difference in image quality is amazing. I use an ASI533MC Pro, so the FOV seems to be perfect for the 8-inch; stars are sharp. The only problem is my mount: I own a Sky-Watcher EQM-35, so it's pushing the limits lol.
My head just exploded! … Seriously tho, great video Cuiv. Worth doing a deep dive into the subject. I do wish that you would include DSLRs more often in your discussions tho.
Glad this was helpful! In that case, how should I include DSLRs? They work exactly the same (sensor size, pixel size, lens raw focal length, lens aperture diameter)
I think a white board would help... Other than that it was an interesting comparison of the various factors and their relationships to each other in a theoretically perfect system! 👍👍
I use aperture net area * (pixel scale^2) to compare different scopes for speed. Interestingly, a 150mm/1200mm f/8 Newtonian with a 533MM Pro in bin 2 mode is twice as fast as my 90mm TS CF f/6 APO in bin 1, for similar pixel scales. I might take a chance on the f/8 Newt as a galaxy scope with a 1.3"/px image scale; the 533MM binned would be 1500 x 1500 px, so resolution should be OK
That's actually effectively the same thing as the formula presented, without central obstruction or transmittance (pixel size, focal length, and aperture are used, equivalent to focal ratio and pixel size :) )
It should be related to the "resolution": instead of per pixel, express it per arcsecond, and it can later be adjusted for seeing and under/oversampling. :)
@@CuivTheLazyGeek You are absolutely right. But perhaps converting the pixel scale as well as its noise to arcsec scale (read noise/") would be a nice cheat for that :D I quite often think in arcseconds instead of pixel scale, because of the old times with analog film that brought us the idea of F/D as speed :)
@@robsonhahn don't forget that when you're computing something like read noise/arcsecond, you have to use the squares of the read noise! So if you have a pixel with 4 e of read noise that covers 4 arcseconds squared of sky, we have 2e of read noise per arcsecond squared of sky, not 1e!
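The quadrature point above can be verified in two lines. Using the same example numbers from the reply (4 e read noise, 4 arcsec² per pixel):

```python
import math

# Converting read noise to a per-arcsecond figure: noise combines in
# quadrature, so you scale the VARIANCE by area, not the noise itself.
read_noise_e = 4.0            # e- per pixel
pixel_area_sq_arcsec = 4.0    # each pixel covers 4 arcsec^2 of sky

variance_per_arcsec2 = read_noise_e ** 2 / pixel_area_sq_arcsec  # 4 e^2
noise_per_arcsec2 = math.sqrt(variance_per_arcsec2)
print(noise_per_arcsec2)  # 2.0, not 1.0
```

Dividing the noise directly (4 e / 4 arcsec² = 1 e) would be the naive, wrong answer; dividing the variance gives the 2 e figure from the thread.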
Thanks for the video. Nice to watch, as usual. But. Image circle is also important. Consider the same focal length, but one telescope at f/2 and one at f/4. The one at f/4 can be used with full frame, but the one at f/2 accepts only smaller sensors, let's say with a size of 1/4 of a full frame sensor (OK, not so realistic, but one can get the idea, I hope). At the same time, both telescopes can image the same given big area of sky, thus they are equally fast for the purpose of mosaics. Further, if you want to do narrowband with the f/2 scope you need a wider bandpass, leading to more signal from light pollution, inducing noise, thus worse SNR. Your superfast f/2 telescope with, say, 6nm preshifted filters now performs worse than a decent f/4 telescope with 3nm filters. Such considerations should be included in the formula... Clear skies
To reuse your comment format: Good points! But. The image circle is whatever the manufacturer says - there are no criteria for it, or how much vignetting is acceptable with the image circle. So it's difficult to actually compare image circles of different manufacturers without knowing exactly their criteria or their MTF :) Overall this is taken into account by the sensor size I mention in the video, as the sensor étendue formula, making the assumption that the telescope supports that sensor size. Narrowband is true as well, I've done multiple videos on the topic, but I can't touch on everything in a single video. Should also include spot diagrams. And budget. And, etc. etc. There's just too much for a single video so I have to stick to one topic
It's just a ratio; by itself it means nothing. Aperture, pixel scale, sensor efficiency, sensor size, exposure, and all the other variables of sending photons down a tube with varying quality of optics and alignment, let alone atmospheric seeing, make far more difference than f/3 to f/4, which is just an expression of field of view! Focal ratio is something for terrestrial cameras, with variable aperture (so variable focal ratio) and an abundance of light.
Hello, you have the precise and detailed answer in the "technical pages" of Thierry Legault's website (and in his book, of course): go to "technical pages", then "l'obstruction"... (Thierry's website is in French and English)
About binning color images, DSS has a technique called super pixel debayering, which is 2x2 binning and debayering at the same time. This solves the issue of interpolating colors and at the same time it boosts the signal at the expense of resolution. To me this sounds like the best of both worlds for a color camera with small pixels. What's your opinion on this approach?
That sounds like the standard super pixel technique, which does work great at the cost of resolution :) It's a good technique, but it all depends on what equipment we are looking at, seeing, etc.
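For anyone curious what superpixel debayering actually does, here is a minimal sketch, assuming an RGGB Bayer pattern (the array values are synthetic test data, not real sensor output):

```python
import numpy as np

# Superpixel debayer: each 2x2 RGGB cell becomes one RGB pixel, halving
# resolution but avoiding any color interpolation between neighbors.
def superpixel_debayer(mosaic):
    r  = mosaic[0::2, 0::2]   # top-left of each 2x2 cell
    g1 = mosaic[0::2, 1::2]   # top-right
    g2 = mosaic[1::2, 0::2]   # bottom-left
    b  = mosaic[1::2, 1::2]   # bottom-right
    g  = (g1 + g2) / 2.0      # average the two green samples
    return np.stack([r, g, b], axis=-1)

mosaic = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 mosaic
rgb = superpixel_debayer(mosaic)
print(rgb.shape)  # (2, 2, 3)
```

Averaging the two green photosites also gives the green channel a small SNR boost over the red and blue ones, which is part of why the technique stacks so cleanly.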
I guess this explains why I could never work out why an 11" RASA is better than an 8" RASA, despite both being f/2 but having different apertures. 😮
I'd approach this subject from the pixel scale side of things. You used the 715 sensor with tiny pixels. Compared to the 585 it made your imaging rig 1/4 as fast, but was it worth it? I think it was in some ways. The real question is, what can we get away with, under what use case and conditions?
Does shot noise scale in the same way though? Is it actually pushing you back by half… I think how tall the grass is might deviate. I understand what you mean, but it might be worth mentioning it’s light pollution dependent. Nice summary of parameters, thanks.
I've finally settled on the flexibility of a C9.25 EdgeHD with the 0.7x reducer & Hyperstar V4. I wish I could get an IMX571 camera with slightly smaller than my current 3.76 micron pixels though, to try and take advantage of the resolution my aperture is capable of.
So my C9.25 isn't really f/10 (2350÷235). In reality it is a 235mm objective diameter minus 85mm (the obstruction diameter of 36%), with the focal length of 2350 then divided by that, resulting in f/15, not the f/10 as commonly advertised and always referred to. So the Origin isn't truly f/2.2, and a C6 with Hyperstar isn't really f/2. Correct? Thanks Cuiv for a video that's making me think! Brilliant!
So that's not correct! You can't just subtract 85mm from 235mm, because 1mm at the edge of the aperture is very different than 1mm close to the center. You need to subtract the two areas and deduce an equivalent aperture, which in your case is 219mm, for a transmission ratio result of T10.7, excluding loss to corrector plates and mirrors (the focal ratio is still F10, because of its definition - important to keep it as is, as it also determines the largest incidence angle through filters)
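The area-based correction described in this reply works out like so, using the C9.25 numbers from the thread:

```python
import math

# Equivalent clear aperture of an obstructed scope: subtract AREAS, not
# diameters (C9.25: 235 mm aperture, 85 mm central obstruction).
aperture_mm, obstruction_mm, focal_length_mm = 235.0, 85.0, 2350.0

clear_area = (math.pi * (aperture_mm / 2) ** 2
              - math.pi * (obstruction_mm / 2) ** 2)
equivalent_aperture = 2 * math.sqrt(clear_area / math.pi)
t_ratio = focal_length_mm / equivalent_aperture
f_ratio = focal_length_mm / aperture_mm  # unchanged by the obstruction
print(round(equivalent_aperture), round(t_ratio, 1), f_ratio)  # 219 10.7 10.0
```

Subtracting diameters (235 − 85 = 150 mm → f/15.7) overstates the loss badly; subtracting areas gives the 219 mm / T10.7 figures from the reply, while the geometric focal ratio stays f/10.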
But remember guys... the f/2 scope delivers 2 times better SNR, and if the f/4 tries to keep up with that 2 times better SNR, it has to integrate 4 times longer. And I think reducing capture time and getting more out of those rare moonless clear nights is absolutely key. In the end we are viewing our astrophotos on our PC, for example as a full screen wallpaper, or as a print of a specific size. And no matter what equipment we use, we are always looking at the same final size. And the one thing that determines the quality of our result the most is the complete sum of photons we are looking at. It doesn't matter that much if we are looking at a picture with 3840 x 2160 compared to 7680 x 4320 pixels if we invested the same amount of photons in both variants. The higher resolution picture will have a worse SNR, but its pixels are displayed 4 times smaller, which evens out. The sum of photons we collect is mainly determined by aperture (area in mm²!) and sensor size (area in mm²!), plus smaller factors like QE, central obstruction, light transmission of filters, etc. To compare: aperture of a 50mm refractor: ~1,950mm² area. 200mm Newton: over 31,400mm². That is a factor of x16 difference! Also sensors: IMX533 = ~128mm², full frame = 864mm². That is a factor of ~x7 difference! Keeping those huge numbers in mind, all those other factors like QE, read noise or whatever else you find become negligible.
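The area arithmetic in the comment above is easy to reproduce. One note: the IMX533's 11.31 mm square active area (3008 × 3.76 µm pixels) works out to roughly 128 mm², so the gap to full frame is closer to 7x than 12x:

```python
import math

# Rough "total photon" comparison: collecting area times sensor area
# dominates the smaller factors (QE, obstruction, filter transmission).
def circle_area(d_mm):
    return math.pi * (d_mm / 2) ** 2

refractor_50 = circle_area(50)    # ~1963 mm^2
newton_200   = circle_area(200)   # ~31416 mm^2
print(round(newton_200 / refractor_50))  # 16

full_frame = 36.0 * 24.0          # 864 mm^2
imx533     = 11.31 * 11.31        # ~128 mm^2 (11.31 mm square sensor)
print(round(full_frame / imx533, 1))  # 6.8
```

Either way, the comment's thesis holds: the aperture-area and sensor-area ratios are an order of magnitude, dwarfing QE or read-noise differences between modern sensors.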
It is also much, much more expensive to make good sub-f/4 optics, and filters for them. It is always better to have a focal length that suits your camera, and optics that draw well at that focal length, than to go for speed at the expense of FL or optical quality. If you get something like an Epsilon, the cost of the scope is only the very beginning... and the FL is only good for a limited number of things.
So... using the f-ratio comparison formula, your RedCat 51 at f/4.9² is allegedly 4.16 times faster than your SE6/C6 at f/10², "everything else being equal"... but it rarely is in comparisons. (If I understand this correctly...) the capture area of your SE6/C6 has a 14% secondary mirror obstruction by area (Celestron), so (150mm/51mm)² = 8.65x more capture area, and 8.65 x 0.86 for the obstruction = 7.44x. IOW the SE6/C6 has 7.44x the true capture area/light gathering ability of the RedCat 51, making the SE6/C6's true advantage 7.44 / 4.16 = 1.79x: actually 1.79x faster optics for exposures, and ~3x the resolving power of your RedCat 51. In addition, the SE6/C6 image will be (square root of 1.79 = 1.34) ~34% less noisy??? "My Brain Hurts" -Monty Python's Flying Circus 🍺🍻 ahhhh 😎
I made a good choice buying the Askar FRA400 at f/5.6, plus the reducer for 280mm at f/4, and a 2600MC Pro. But what about a bigger rig, like 800-1200mm? I am still doing research... and I have no idea what to choose: an EdgeHD 8-inch with some reducers, or a refractor like the Askar 120 APO... Help 😁😁😁😁🤯
I love it..."And that's basically it." LOL!! Sometimes I think you're really an AI bot, Cuiv! Seriously, thanks so much for all this. After I watch another 30 times, I think I'll be able to soak all this up. I wonder if there really IS an AI app that can ask us what we want to do and what equipment/sky we have, and spit out..."Here's the best solution." Thanks again, Cuiv!
Very interesting, although I find topics like this are usually not too relevant to where I live due to poor seeing being the norm, and decent transparency rarer still . lol
Probably can't link another YT video but Sky Story has a great video "Understanding Focal Length: Trading Speed for Detail" to help resolve our understanding of this topic
I just had a look! The conclusion and some of the explanation is very good... but the main part of the explanation (with the light rays crossing) is incredibly incorrect since the target is effectively at infinity. I'll leave a comment on his video, that's actually quite bad (he's explaining with the light rays crossing why an in-focus image is brighter per pixel than an out of focus image, NOT why focal ratio causes this...)
@@CuivTheLazyGeek I was so focused on some of the different aspects he brings up that I didn't pay nearly enough attention to that! I did think the twists in the rays were rather peculiar and even distracting. Thanks for responding and pointing it out. I only want to perpetuate factual information.
@@old_photons unfortunately Cliff seems to be confidently wrong in this case... Several others have tried to convince him on Facebook where we also posted a link to his video, but he's not backing down... Oh well, it is what it is!
@@CuivTheLazyGeek Sure enough, I've been following the YT comments over there. I plan to watch the physics class posting you shared and see what other optics stuff I can find to broaden my own understanding, while I wait for new content from all my regular content creators :)
The optical laws tell us that a long focal length increases the darkness of the background. The spot size is linked to the F/D ratio, and the energy captured is linked to the size of the main mirror, so each instrument has its own noise level. For the sensor, we have the pixel size, the gain according to its spectral sensitivity, and its saturation limit. Of course, the bigger the pixel, the bigger its capacity to capture energy, but once again the type of telescope it is attached to will have an impact on the results. If we use a sensor on a short F/D telescope, the sensor will be quickly saturated. But of course the size of the telescope mirror gives the capacity for resolution, the details. This is why a bigger mirror at the same F/D is very attractive for the same field of view: the spot size is the same, but the captured details are very different... But this is valid in space only. On Earth the story is much different, and this is why Antarctica can offer better seeing, just because the air can be stable over 1m of diameter. At such a site, any amateur telescope can reach its theoretical optical resolution. But usually, in good weather conditions, the air is stable over only about 200mm of diameter, which is also about the limit of tracking resolution on telescope mounts. Of course, at altitude with dry air the situation improves a lot; this is why we have big telescopes in certain places in the world. For the sensor choice, matching the separation power is a starting point: choosing smaller pixels will not change anything for better resolution. And so far the C14 at F2 is the best compromise for people not having the opportunity to get an exceptional site to observe from. Above this diameter, resolution is difficult because the Airy spot is a victim of atmospheric instability, and it is compulsory to find a site at altitude with stable air.
It is clear the manufacturer will only deliver the full quality of its instrument if it costs a fortune, because in optics, the time spent to get a first-class instrument makes it an art product. I learned that the compromise with CMOS is in the range of 5 micron pixel size. Why? Because reaching 5 micron spots at 15mm from the optical center is very difficult to produce. Below this value, binning becomes unavoidable for industrial telescopes. Of course, if we live at a perfect site and have an instrument capable of giving spots under 5 microns at 15mm off-axis, the situation is different. I'm not sure these details were expressed in this video...
Cuiv - I am having difficulty understanding why a longer focal length results in a dimmer object. For a given aperture size, wouldn’t the photons fall onto a tighter FOV due to a longer focal length and therefore more photons per area of FOV? I know this isn’t right because from observational astronomy I can easily tell that a higher focal length results in a lower brightness of image. What am I missing?
Basically look at it from a resolution perspective: if the pixel size stays constant but the focal length increases, each pixel will see a smaller area of the sky (better resolution) but as a result fewer photons will reach that pixel (you can imagine The White Wall nebula that has a constant photon flux across its surface, if a pixel sees one arcsecond squared of that nebula, it receives 4 times fewer photons than a pixel that sees 4 arcseconds squared of the nebula!)
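The resolution argument in this reply can be put into numbers. A sketch, assuming a fixed aperture and a 3.76 µm pixel (focal lengths here are illustrative):

```python
# Fixed aperture and pixel size, varying focal length: the pixel scale
# (arcsec/px) shrinks with focal length, and photons per pixel scale with
# the solid angle each pixel sees, i.e. the pixel scale squared.
PLATE_SCALE_CONST = 206.265  # arcsec/px = 206.265 * pixel(um) / FL(mm)

def pixel_scale(pixel_um, fl_mm):
    return PLATE_SCALE_CONST * pixel_um / fl_mm

short_fl, long_fl, pixel_um = 400.0, 800.0, 3.76
scale_short = pixel_scale(pixel_um, short_fl)
scale_long  = pixel_scale(pixel_um, long_fl)

# Doubling focal length halves the scale: each pixel sees 1/4 the sky area,
# so it collects 1/4 of the photons from a uniform nebula.
photon_ratio = (scale_long / scale_short) ** 2
print(round(photon_ratio, 2))  # 0.25
```

This is exactly the "White Wall nebula" thought experiment: per-pixel brightness drops as the square of the focal length at fixed aperture, even though the total light gathered is unchanged.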
??? Head exploding. Like many in this hobby, I have acquired a few telescopes and cameras. As I have a range of focal lengths, I got cameras that would give a good pixel scale (between 1 and 2 arcseconds/pixel). But I'm not sure the pairings are the best. FL/Aperture/Camera: RedCat51 - 250mm/51mm/ASI183MC Pro gives a 2.01 x 3.02 degree FOV; AT115 with 0.8x F/R - 645mm/115mm/ASI294MC Pro gives a 1.15 x 1.70 degree FOV; AT115 - 806mm/115mm/ASI294MC Pro gives a 0.92 x 1.36 degree FOV; Askar71F - 490mm/71mm/ASI585MC Pro gives a 0.93 x 1.25 degree FOV. F-numbers are 4.9, 5.6, 7.0, and 6.9, respectively. The last two appear very similar, but the AT115 has the better aperture and probably optics.
I think the impact of the central obstruction should also be considered. You can have a crazy fast f/2 RASA telescope, but when half of the "aperture" is covered by the camera, you can't say that it is still a real f/2 ratio, or that this telescope gathers "the same amount of light" as, let's say, an f/2 refractor without a central obstruction. OK, I see it is included in the further calculations.
But it does, think about this: if I add 1mm to the center of a mirror, the additional area is small, right? If I add that same 1mm to the edge of the mirror, well, that's a much larger additional area. Now, the central obstruction does affect the contrast. That's why visual telescopes like Dobs keep the central obstruction small (~22%).
But don't the noise-reduction possibilities make the pixel-size question less important? I'm not an astrophotographer yet, but I'm a photographer, and with the powerful noise-reduction tools we have now, resolution always seems to be the winner (noise artefacts are also smaller on smaller pixels).
Another great example of the minutiae that overthinking astrophotographers focus on. You nailed it. The reality: you need good optics, a good camera, and good skies (transparency and seeing). That's the ratio they need to make. Help people not buy unnecessary equipment that can't perform in their location. A Planewave under a Bortle 9 sky vs an Askar under a Bortle 1 sky? The Askar wins every day. Just an example of how I think about it, just one man's opinion. Same goes for mounts: why spend so much on a mount that your system doesn't need? The story is similar with the whole flats and dark flats thing. So much fuss to get a .000001 percent better image; I use sky flats at whatever time they turn out to be, and bias, like you. That's it. I'm so glad that in that "brilliant" brain of yours you have common sense!!! Thanks again for the hard work.
In French, we specify "étendue géométrique" or "étendue optique": in fact, "étendue" used alone is just the translation of "extent".
What really confuses me is that we use the same terms in different ways when we are talking about "regular" lenses and cameras vs telescopes. A 100mm lens vs a 100mm telescope is not the same?? F-stop is not the same F you discuss in this video. T-stop seems similar to the T you were talking about, but not exactly the same thing. So how do we compare these two worlds of imagery? I know more about cameras than telescopes, but I don't know how to translate that knowledge into the telescope world.
Yeah, in photography "aperture" often means the focal ratio, and the physical aperture diameter of the lens is ignored. And a 100mm lens refers to the focal length, whereas a 100mm telescope refers to the aperture diameter. And then you have crop ratios, full frame equivalent focal lengths, etc. to make things more complex. This is because photographers and astrophotographers have different priorities!
@@CuivTheLazyGeek I kind of understand the different needs, but it would be nice to know how this lens compares to that telescope, etc. Sounds like a great subject for a video... thank you for the reply
I didn’t get a minivan, but I bought a rooftop tent so I could sleep overnight at my dark site and still be functional at work. Went from only imaging on Friday and Saturday to any night the conditions are good. Best investment in AP I have made.
Depends on what you put in front of them? Pixel size needs to be matched to image scale "Astro Tools" have a nice calculator for this. Google it. As a general rule short focal length scope = small pixels. Longer focal length scopes = larger pixels.
I'm not sure if Cuiv is correct. f/2 gathers 4x more light than f/4, but I think a 2-minute exposure with both has the same shot noise. Why would the shot noise increase with a faster scope?
Because shot noise is equal to the square root of the signal! More signal automatically means more shot noise. Mathematically, this is because a rain of photons onto a detector can be modeled by a Poisson distribution, whose standard deviation is the square root of its mean!
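The Poisson claim is easy to check empirically. A quick simulation (the photon counts are arbitrary illustrative means):

```python
import numpy as np

# Shot noise sketch: photon arrivals follow a Poisson distribution, whose
# standard deviation is the square root of its mean. More signal means
# more shot noise in absolute terms, but a better SNR (mean/std = sqrt(mean)).
rng = np.random.default_rng(0)
for mean_photons in (100, 400, 1600):
    samples = rng.poisson(mean_photons, size=200_000)
    # measured std vs theoretical sqrt(mean)
    print(mean_photons, round(samples.std(), 2), round(mean_photons ** 0.5, 2))
```

So quadrupling the signal (as f/2 vs f/4 does per pixel) doubles the shot noise but also doubles the SNR, which is the whole point of the faster scope.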
Oh Cuiv, you're awesome! I love your comment toward the end of the video, which is to start by thinking about the FOV and resolution you want to achieve - i.e. what kind of targets are you planning to shoot - then work back from there to determine your ideal configuration. Even the practical aspects of how much time you can consistently get on target (due to weather in your area or the fact that you have to pack-in/pack-out each time) has an important impact. These factors are so much more important to *start with* than "is this rig better than that," etc. For me in L.A., shooting small-to-medium nebulae and larger galaxies over limited periods of time, a 6" Newt with smallish, low-noise OSC sensor works great.
You definitely have the best astrophotography channel. The fact that you give so much time to explaining (and doing) all that stuff is proportional to your dedication and knowledge of astrophotography raised to the nth power. Definitely my favourite channel for this kind of content.
Thanks so much!!
Forgetting the math, what I understood:
1 - Matching the goal to the equipment and seeing conditions is fundamental to get the best results from a given set of equipment;
2 - More light and resolving power come from bigger apertures; a bigger objective brings more light;
3 - Focal length impacts field of view;
4 - Smaller sensors will have bigger magnification and a smaller field of view. Small sensors look good for small bright objects, and with bright objects, small pixels are not a problem;
5 - Bigger sensors will have lower magnification and a bigger field of view. Bigger sensors look good for bigger faint objects, and with faint objects, bigger pixels are important;
6 - For faint objects long exposures are needed, and long exposures call for good guiding;
7 - A good workflow and post-processing are fundamental to get better results.
This is a good summary, as far as I understand.
Good summary overall! Although remember that if a large and a small sensor have the same pixel size, you could always crop the images of the large sensor to accomplish exactly the same results as the small sensor :)
@@CuivTheLazyGeek Now you've done it. (Large sensor with small pixels) You've added another constraint. BUDGET!! 😅
what would be a good workflow, can you list it please?
@@JuanGutiérrez_cl I think something like: taking flats, darks and bias; imaging objects when they are high in the sky; archiving everything in an organized way; using a standard naming scheme for files and folders; making sure everything is collimated, clean, in focus and aligned; testing your mount for guiding errors. After the imaging session, choosing the best images for processing, and so on. In other words, it is like making a task list to be sure you always do everything that has to be done to get the best from your equipment every time.
I'm learning, but I don't have the equipment to take images yet, so this all comes from reading and watching videos; take this into account.
Great video! I had a couple of comments:
1) With respect to QE in various formulas (étendue, etc.), don't just copy the listed QE value from your camera manufacturer's site directly. This value is usually actually "peak QE", i.e. the maximum quantum efficiency over the whole spectrum of light. You probably don't care about that specific wavelength, so it's probably better to look at the QE graph and determine the relevant QE for a wavelength you're actually imaging, like H-alpha.
2) On software binning - one thing not frequently mentioned is that smaller pixel sizes generally tend to have a worse fill factor (that is, the percentage of the area of the square they occupy that is actually photosensitive). Modern sensors try to mitigate this with microlenses, but this sometimes results in suboptimal QE across the whole spectrum compared to a larger pixel. If you're imaging in a weird wavelength or have some specific scientific need, larger pixels are probably a good idea. I suspect this is part of why lots of professional telescopes use giant 9 and 12 micron pixels compared to the 3-5 micron range we use for our gear.
Both good points! I didn't want to go in the QE curves since I show that in other videos, and I do allude to the loss to the "borders" of pixels in subtitles, which indeed are worse with small pixels!
I made the mistake of chasing speed. I found that quality optics, quality mount, and properly matched camera to the optics is what matters most. So now I shoot mostly with APO refractors. Yes it takes longer to get the desired SNR for a good image, but the results are far superior than what I got from an SCT or newt with similar SNR after processing. Better stars, better contrast, and in some cases better details despite smaller apertures being used.
Good point as well!
THIS!
One other thing I would add is that fast systems tend to be the hardest to use. Extremely fast lenses are bound to have optical aberrations, sometimes beyond the ability to fix. So while you are capturing a ton of signal, your image might not be so good. People should consider overall optical quality when buying a scope.
Very true as well!
Yes! Optical aberrations including seeing. Worse seeing would indicate larger pixels/binning.
As an astrophotographer moving from beginner to amateur and looking to upgrade my beginner rig, I think this will help me out as soon as I can fully understand it lol. I did not know about all the different little factors that affect the signal coming into the camera. I use a little Canon EOS M200 and just upgraded my mount from a SAM to an AM5, so telescope and cameras are next. This video will definitely help me understand those purchases a bit better when I can make them. So I appreciate the effort you put into explaining this!
Congrats on your AM5, and glad this will be helpful!
The bad news is that when we figure it all out then the atmosphere and light pollution are still the two elephants in the room... 😊
In the Uk add in geo-engineering/global dimming, and blanket grey clouds
Yep, in the end the easiest is to just move to a dark area haha
@@CuivTheLazyGeek I'm in a dark area, I can't figure all this stuff out.
Why can't I just get an app that I can enter in my equipment?
Of course if I have already bought it all I suppose the trick then is to make sure I'm using the perfect settings.
But then even if I'm using the perfect settings.....
How do I know if I'm doing the perfect stacking? Maybe I should throw away half my subs? Maybe I should only use 5% of my subs?
And what about calibration frames?
I read that I should take my flat frames at the same focus point as my lights.
But I also read that as the temp changes during the night, I should be changing my focus.
Ok so now I take 30 Ha frames and I end up with 5 different focus positions during the night.
Morning comes. I have a Moonlite focuser. So I know exactly the focus position of each handful of subs.
So now I have a data set from one night that I took 3 narrowband filters of data, each with multi focus positions and all with different flats. All use the same darks and biases.
And I do this for a week.
I did this just last week. We had super clear nights with great seeing and perfect transparency.
First time it's happened for me in three years.
I got 34 hours of total exposure time.
How the hell do I combine all the calibration frames and stack my images properly???
Is there a way to get NINA to use the flats wizard to read the focus positions from the file name and shoot the flats at the right focus positions?
The Moonlite focuser has zero backlash so I'm certain I get repeatable focus positions.
There are so many variables. I really wish AI could use my equipment specs, location, weather, and atmospherics to tell me what gain, offset, and exposure lengths to set NINA at.
There has to be a way to figure all this out if it's so easy to mathematically figure it out.
Cuiv, work your magic in Nina.
You're our only hope .
@@christopherleveck6835 First things first! Flats don't care about small (and even somewhat large) focus changes (I have a video explaining why), so don't worry about that!
Cool, informative video. It's always a pleasure to watch your videos. Greetings from light polluted Wrocław (Poland) 🙂
I was really looking for a physical interpretation of those parameters in order to understand what is good and what is bad. I had to pause many times to think about what you were saying. I absolutely need to bookmark this video because I will have to come back to it later. Many thanks for your down-to-earth tutorial.
This video is awesome! I didn't see it when it was published (because of my arm - and I just got done making a similar video!). Just one comment: if a scope doubles the SNR, then it will take 1/4 the time to reach the same SNR, because SNR follows a square-root function. So it really is 4x faster.
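As a sanity check of that square-root argument, here's a tiny Python sketch (the photon rates are made-up illustrative values; this assumes a purely shot-noise-limited signal):

```python
import math

def snr(rate_e_per_s, t_s):
    """Shot-noise-limited SNR: signal divided by sqrt(signal)."""
    signal = rate_e_per_s * t_s
    return signal / math.sqrt(signal)

# A rig that collects 4x the photons doubles the SNR in the same
# time, so it matches the slower rig's SNR in only 1/4 the time.
slow = snr(rate_e_per_s=10, t_s=3600)        # baseline: 1 hour
fast = snr(rate_e_per_s=40, t_s=3600 / 4)    # 4x rate, 15 minutes
```

Both calls return the same SNR, confirming the "4x faster" claim for a system that doubles SNR at fixed exposure time.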
Aperture always wins; the focal ratio is just the outcome of how much angular field you want to cram onto a given sized sensor. What everyone misses is how critical focus, collimation, and tracking accuracy are to detecting faint objects. If you can lower your FWHM from, say, 3.5 arcseconds to 2.5, that's about one magnitude fainter limit of detection.

The other thing worth pointing out is that spot size (the physical size of the Airy disk on the sensor) depends ONLY on focal ratio; it's about 5 microns at f/4, so at f/2 you can't even sample the diffracted spot with most cameras, and in fact the RASA design isn't even diffraction limited (per the Celestron published spot diagrams).

People get way too hung up on this, and I don't buy the concept of pixel étendue. It doesn't mean anything. The whole point of it was to talk about productivity, meaning you want to survey X square degrees of sky down to Y limiting magnitude in the minimum amount of time. The RASA 11 will do about six square degrees to magnitude 20.5 in about six minutes; that's its productivity, while an f/10 telescope will cover 0.1 square degrees to the same limiting magnitude in the same time. So when you pay for a "fast" optic you should be paying for the maximum size sensor that goes with it, because you want the highest productivity (square degrees of sky imaged per hour), not because you want to image some small target in less time than the slower system.
Not nearly as simple as you make it seem. Yes, you want the maximum sensor size that fits the image circle (allowing for edge roll-off) to maximize productivity, but there are a ton of parameters to consider if you want to survey objects down to magnitude 20.5 😅
That would have made this video hours long. 😅
Thanks Cuiv for cutting yourself off where you did!! * 😅
Hi Cuiv. I think for a lot of people a simple list of field of view/resolution and recommended focal ratio/pixel size would be helpful. Like a checklist: if I want to photograph Andromeda with the best detail, what telescope/camera is recommended? Thanks for the deep dive. The math was not too complicated 😅
There's so much that's up to personal preference as well as site conditions that I'm not sure it's that easy...!
Great talk Cuiv on an interesting subject.
There has been a lot of talk on the forums about all of this.
Another related issue is the difference between point sources (stars) and extended sources in how the final image is processed.
Another good, recent source is the AIC talk "The Quest For Aperture: Why Are Big Telescopes Better".
Absolutely! I added the subtitle at the start of the video to mention I talk only about extended sources :)
At the end of the AIC talk was a spreadsheet that you could download to compare different setups.
If I've summarized this video correctly in my head: 1) Have a solid idea of the angular extent of the object(s) you wish to have in your field of view first.
2) Then (if money is no object) pick the camera (sensor) and focal length & aperture that is most efficient for acquiring the subject at the best possible resolution given the local seeing conditions.
This is why having, as an astrophotographer, several different scopes (focal length & aperture combos) along with a number of different cameras enables a wide variety of subjects and the best possible images.
Example: for wide-field nebulae (NGC 7000 with the Pelican Nebula, or the entire Veil nebula complex), a short-tube refractor with a camera sporting the IMX571 sensor (e.g. ZWO ASI2600 MC/MM Pro) or better will maximize detail while providing an image across the full field of view.
While if imaging distant galaxies (i.e. not M31 or M33), an 8" or greater SCT, Newt, Mak-Newt, etc. with an IMX533 sensor (e.g. ZWO ASI533 MM/MC Pro) will give a more restricted field of view around a subject such as M51 or M101 and allow resolution of some of the structure in the spiral arms (i.e. small details).
Of course there is always Mosaic'ing but I digress.
Did I capture the essence and tensions correctly?
This is pretty good as a summary, although in the last step, you could use the IMX571 instead of the 533 and just crop! The two sensors are almost identical besides their size!
@@CuivTheLazyGeek I wrote IMX533 but I think I really meant the IMX585. You're right above: I believe the pixel scale is the same with the IMX533 and IMX571 sensors, one sensor is just larger than the other.
This stuff gave me a bit of a headache lol but I'll get my head around it eventually. You definitely know your stuff dude.
Thanks Nick! Sorry I'm unable to express it more clearly - I'm planning on making a video of "how do telescopes work" with actual lens simulation software so hopefully that will help!
As you said, there are a whole lot more things that significantly impact the imaging and the final images, besides the differences in focal ratio.
People tend to think: "I bought this fast telescope, so now I can capture images faster so I don't need so many clear skies anymore".
That's only partially true. A whole other bag of issues just opened up, and there are plenty of other things to worry about, including but not limited to sensor capability (the camera could be overexposing on bright objects) and unforeseen or previously unseen optical aberrations, since now small issues are magnified into larger issues. The list goes on: light pollution vs fast optics, filter compatibility vs fast optics, and so on. Nothing is simple in this hobby, nothing is straightforward, and it's one of those hobbies where A+B does not equal AB but something more complicated.
Absolutely! And there's of course image circle, vignetting, etc.
I'm interested in trying film astrophotography (I despise the processing side of the hobby, unfortunately...) which is essentially optimizing a single frame for SNR and exposure. It's an interesting puzzle that today's guiding tech could make much, much easier than it was 20 years ago. That said, with all of the starlink crap flying around now, I'll never be able to get a clear shot without satellite trails.
Still worth a shot, I'm sure there are people doing it, but yeah no idea what happens to satellite trails
Yep it's all about sampling and pixel density. Thanks for pointing this out Cuiv, this will help people make more informed decisions on their setup.
Thanks Dave! I'm surprised there's so little controversy at this stage :)
Great vid again Cuiv, merci!
Sadly, it all starts with the budget. In an ideal world, meaning I had $10k to spend, I would:
1- first select the resolution -> 1"/pixel to 2"/pixel
2- then select the bigger sensor that keeps me away from a lot of tilt issues -> APS-C (23.5 x 15.7mm, or 23'500um x 15'700um)
3- then select the field of view I want -> min 2° x 1.4° (ratio of APS-C) = 7200" x 5000"
4- Compute the max focal length I need: Fmax = 206 * 23'500um / 7200" = max 670mm
5- Compute the pixel size I need: P = resolution (1 to 2") * F / 206 -> I need P in between 3.2 and 6.4um
6- Choose the cam. imx571 sensor is APS-C with 3.76um pixels! -> 2k$!!! :(
7- Compute the min focal length I need: Fmin = P * 206 / resolution_max = 3.76um * 206 / 2" = min 390mm
8- Choose a type of telescope where I can add data from one year to the next -> avoid spikes on bright stars, no spider holding the secondary, so no Newt
9- Choose a type of telescope where imperfect collimation doesn't affect the image quality much -> focal ratio min 4
10 - That leaves me with refractors; select the biggest size that weighs less than 15kg -> 120mm -> $3k!!! :(
11 - 120mm F7 refractors have a focal length of 840mm; add a good 0.7x focal reducer that fully illuminates an APS-C sensor -> $500
12 - choose a mount that can guide 20kg reliably -> 2-3k$
But if I have less than 5k$ to spend, all that beautiful plan disappears :(
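The arithmetic in steps 4, 5, and 7 of the plan above can be sketched in a few lines of Python (using the commenter's assumed values: an APS-C sensor 23,500 µm wide, a 2° ≈ 7200" field, a 1-2"/px target scale, and the IMX571's 3.76 µm pixels):

```python
PLATE = 206.265  # arcsec per (micron of pixel size / mm of focal length)

sensor_w_um = 23_500     # APS-C long side in microns
fov_w_arcsec = 7_200     # desired ~2 degree wide field, in arcsec
pixel_um = 3.76          # IMX571 pixel size

# Step 4: longest focal length that still fits the field on the sensor.
f_max = PLATE * sensor_w_um / fov_w_arcsec      # ~673 mm

# Step 7: shortest focal length keeping the scale at or below 2"/px.
f_min = PLATE * pixel_um / 2.0                  # ~388 mm

# Step 11: an 840 mm f/7 refractor with a 0.7x reducer -> 588 mm,
# which lands inside the [f_min, f_max] window.
scale = PLATE * pixel_um / (840 * 0.7)          # ~1.32 "/px
```

The 588 mm result explains why the 120mm f/7 plus 0.7x reducer combination satisfies both the field-of-view and the sampling constraints.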
Hahaha yes! It's always within budget constraints :)
I live in Argentina, 5k dollars is 1 year of work :( :( :(, I don't have plans xD
Great exposition!!
I saw Luke's video and wondered if binning made any sense, considering that the data collected by the sensor doesn't care if I have just one huge "fake" pixel or if it's split in smaller pixels. Now that you mention software binning, you answered my doubt.
I really enjoy these nerdy videos. It's not the first time I say it: you make perfect sense of everything in a way that's not usually seen among the "astro-geek" RUclips community.
To make things even better: I learned something. When comparing systems, I always compared signal gathering instead of SNR. Great food for thought, thanks!!
I'm glad this was helpful and that it made sense! Cheers!
Fantastic video as always Cuiv! I love your more technical approach to explaining, it's always very interesting and you explain things really well. Thanks for your hard work!
Much appreciated!
Cuiv, Thanks for keeping your topics relevant and interesting. Particularly for beginner/amateur astrophotographers like me. 🔭
My pleasure!
Great video! A lot of stuff I'm having to learn about. So I'll be watching this video for several times to understand everything better.But this is very useful to me!👌🏻✨👍🏻 Thanks!!
Glad this is useful, and that you enjoyed the video!
Really good video, Cuiv.
I remember I had some debate here with my colleagues, and the conclusion I've reached is the following, assuming some points:
1. we assume we will be using the same camera sensor for the comparison of two telescopes (so the same pixel size, quantum efficiency, read noise, thermal noise, and so on).
2. we assume that both telescopes have the same focal length, so the end pixel scale in arcsec/pixel is the same, and thus the field of view is the same
Now, having this, we compare two telescopes with two different aperture sizes but the same focal length. Let's say 100mm aperture and 200mm aperture. My thinking is that the total efficiency of one's setup, meaning how efficiently the setup can "catch" light under given sky conditions, depends only on the aperture size. More diameter means more light coming into the telescope tube, which, in our example, "falls" on the same sensor.
As the quantity of light collected is strictly correlated with the area of the aperture, and as that area scales with the square of the radius, the 200mm aperture telescope gathers 4 times more light than the 100mm one. This calculation is strictly on the optical/light side, without taking into consideration the SNR factors you just mentioned (which are, indeed, crucial to the overall result).
So in conclusion, my thought is on par with yours: determine your field of view, meaning choose a certain focal length paired with the necessary camera sensor, and then choose the maximum aperture you can afford, of course with the best field correction for your sensor.
Another benefit of choosing the bigger aperture is better resolution (finer details), but I guess that also depends on the resolution your sky conditions allow for... and sky conditions change throughout the night... so...
But as you said, the discussion can take forever... :D
All the best with your YT channel.
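The "4 times more light" figure in the comment above is easy to verify numerically; a minimal sketch (ignoring central obstruction and transmission losses):

```python
import math

def collecting_area(aperture_mm):
    """Unobstructed collecting area of a circular aperture, in mm^2."""
    return math.pi * (aperture_mm / 2) ** 2

# Doubling the diameter quadruples the collecting area.
ratio = collecting_area(200) / collecting_area(100)  # -> 4.0
```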
This kind of detailed videos are very interesting. Thank you!
Glad you like them!
Another great informative video. These videos make my morning commute much better
Glad I could make your morning commute better!
The way I view an instrument in astronomy is as converting a parallel beam from a given direction into a position on a focal plane. So I have this equation in mind: f * dtheta = dx, where f is the focal length, dtheta is the angular size of the object you want to observe, and dx is the size of its image. From this equation I know the total FOV of the instrument and also the resolution in arcsec per pixel. For this I only need the focal length. Then, I agree, the question is: do we have enough photons in each pixel? That's where the diameter D comes into the game. Having fixed F, it is good to have a small F/D, i.e. a large converging beam angle (alpha = D/F). I am not a specialist, but at small F/D the angles are large, so possible distortion of the image?
I've tested two scopes that were close, one f/4.8 and another f/5.2, both just over 500mm, and the superb glass clarity and construction of the "slower" scope made for more SNR, more visible detail, and better FWHM.
FWHM is a very important figure in SNR as well, so that's a very good point!
I guess there are also issues choosing narrowband filters for "fast" systems. There will be bandpass shifts in the filters, requiring different filters. For fun I looked at the JWST: it's f/20 with about a 131m focal length. They specify 0.1 arcsec per pixel. Insane numbers!
Yep, absolutely! NB filters are annoying at fast focal ratios... And yes, the JWST, Hubble, etc. numbers are always a lot of fun to look at!
Thank you for addressing this perennial issue so thoroughly and without mentioning “the Poisson Distribution “
Hehehe cheers!
I would like to see practical comparisons; that's what I like about Luc's video. Just like how you defied the common wisdom about doing astro from Bortle 8-9.
I agree with some previous comments that the complexity of handling a telescope does have an impact, especially considering the initial video. As an example, if you don't get the collimation of a reflector right, you might get worse images than with a smaller (and "worse" f-ratio) telescope (a refractor or a perfectly collimated reflector). Just saying... happened to me 😉.
That's absolutely true! There are many factors that do come into play!
Great video as usual!!! 👍👍 I started with a 150 F4 (the GSO) and a 294MC as my first instruments, as I thought that for a narrow balcony view, speeding up light collection with a low F would be good. After 4 years and a change to a Quattro 150P and a Touptek 571C for smaller pixels with OAG, I'm feeling all the downsides of the high-precision collimation required, compared to the relative ease of use of a similar APO, even if more expensive. This video helped me better understand how things actually work, and I'm still pretty satisfied with my rig. But how did you manage collimation with the new focuser? I bought it too, but I wasn't able to install the Clicklock, as the focuser can't go in far enough given the extra thickness of the Clicklock, and getting good collimation with the stock blue compression ring is a nightmare!
Great video, I would watch a sequel
Thank you! I'll have to think on how to make a sequel :)
Superb video my friend, I really enjoyed it!! :-D Thanks for all that you do for the astro world! 👍👍
Thanks so much Luke!
At a print size of 300ppi, your eyes already do the binning. So even if you don't bin the camera, it will look pretty much the same as if you had binned and then printed the same size image at 150ppi. This tells me that if I have a lot of pixels (and I'm not pixel peeping on my computer screen), the pixel size ratio is not so important. I'd rather keep the resolution. BTW, I learned this for terrestrial photography from a Tony Northrup RUclips video many years ago.
Brilliant, thanks for this VERY detailed video. It answered most of my questions about this subject!
Glad this was helpful!
To be honest Cuiv, I don't really know enough about it to comment on whether you are right or wrong. Personally I just refer to the astronomy tools field of view calculator, enter my various scope/camera combinations for a given target, and see what gives the best looking result.
That works too, keeping things simple :) Thanks for the feedback and for your support!
Cuiv, your videos are great, thank you!
I watch many videos from both you and Nico Carver, and any time the topic of focal ratio comes up, I zone out in disappointment. For me, when I plan an evening of pictures, I select an object to capture, with an idea of how I want that framed. So there is an area of sky that I want to capture.
Imagine a focal length, aperture, and camera that perfectly cover this frame. Hold the camera constant (it is not part of focal ratio) and hold exposure time constant too. If I decrease the focal length ("faster" system), the image is brighter (that's why it's called faster), but my target frame covers fewer pixels. The same number of photons are collected from the target frame; they just fall on fewer pixels. If instead I increase the aperture (also "faster"), the image is also brighter, but the field of view is the same, so I keep all the pixels. More photons land in my target frame.
When I process and crop the resulting image to my target framing, the decreased-focal-length test has no additional photons, whereas the increased aperture did collect more photons. More photons allow more detail. So, assuming I make a picture showing only my intended framing, focal length is not the prime factor; aperture is.
If I own many scopes, I want to image with the one that has the largest aperture while giving me my full target frame. Any two scopes with exactly my target frame will give me very comparable pictures, regardless of the focal ratio. The faster scope will collect the data in less time, but that is because it will have more aperture.
Of course, your analysis goes deeper into noise issues. But I insist that catching the photons I want in my image is most important, and that is aperture!
But isn't that exactly what the video is saying?
Wonderful video! I wonder if you can make a video comparing two telescopes with the same aperture but different focal length with the same exposure time on the details. The small focal ratio telescope should have lower noise, but the object occupies smaller portion in the frame, and the large focal ratio one is in contrary.
Kudos to Cuiv for providing an accurate explanation of what Lukomatico found empirically regarding the limitations of assuming that the light gathering power (aka speed) of an optical system is determined solely by (1/f)^2. This equation is certainly correct, but we need to be careful how we use it. For our astrophotography purposes it is missing a key component of our imaging train: the camera!

The light gathered by an optical system per unit area of the sensor is proportional to (D/FL)^2 = (1/f)^2 (where D is the lens or mirror diameter), but to determine the signal landing on a pixel (which determines image quality) we need to multiply by the area of a pixel (proportional to p^2). Just like with the étendue equation Cuiv mentions in his video, this means the relative signal per pixel varies as (D x p/FL)^2 = (p/f)^2 (I'm ignoring central obstructions, optical transmittance, and sensor quantum efficiency here).

We can go one step further to obtain another useful expression. We often speak in terms of the pixel scale PS, which is proportional to p/FL. Solving for p, we find that p is proportional to PS x FL. Substituting this into the étendue equation, we find that the signal per pixel is proportional to (D x PS)^2, where D is the diameter of our optical system (lens or mirror).

In Lukomatico's video he compared two different optical systems with different cameras chosen so that the two pixel scales were roughly the same. In that case, the signal received per pixel (and hence image quality) is just proportional to D^2, essentially what he found empirically: the telescope with the bigger objective took less time to achieve the same image quality.
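The (D x PS)^2 relation described above can be illustrated with two hypothetical rigs at the same f-ratio but with pixels chosen to match pixel scale (all numbers are illustrative, not from the video):

```python
def pixel_scale(pixel_um, focal_mm):
    """Pixel scale in arcsec per pixel."""
    return 206.265 * pixel_um / focal_mm

def signal_per_pixel(aperture_mm, focal_mm, pixel_um):
    """Relative per-pixel signal, proportional to (D * p / FL)^2."""
    return (aperture_mm * pixel_um / focal_mm) ** 2

# Rig A: 200 mm at f/5; Rig B: 100 mm at f/5 with half-size pixels,
# so both rigs have the SAME f-ratio and the SAME pixel scale.
a = signal_per_pixel(200, 1000, 3.76)
b = signal_per_pixel(100, 500, 1.88)

# Despite identical f-ratios, Rig A gets 4x the per-pixel signal:
# at matched pixel scale, signal per pixel goes as aperture area D^2.
```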
Lukomatico didn't prove anything empirically at all. Hand waving over two images from supposedly similar optical systems proves nothing. Integrate the flux for the two systems and compensate for the differences in gain/filter bandpass, then come back with the empirical data sets, please. Finally, the bigger aperture telescope will only gather light faster if its F ratio is lower than an equivalent telescope's. A 200mm F4 beats a 200mm F6 every day of the week with the same camera.
@@neilhankey2514 We are in agreement - the key phrase you used is "with the same camera". His "experiment" was done using different cameras and different scopes, which made his video confusing to an extent. He did it that way to keep the pixel scale (nearly) constant - a great approach for folks with lots of scopes and cameras, and it produces images with similar resolution. If you use the same camera when comparing different scopes, then the (1/f)^2 expression holds, just as you wrote in your last sentence, since the pixel size is the same. Note that I didn't write that Lukomatico proved anything, just that he reached a conclusion qualitatively (and maybe semi-quantitatively) relevant to his scenario.
Yep, (D*PS)^2 is effectively the per pixel étendue :)
@@CuivTheLazyGeek Cuiv you are awesome. This retired physicist has learned so much from your videos and my images have benefited greatly. thank you!
@@MichaelHundley-k4p That's great, I'm just a little sensitive about people perpetuating the F-ratio myth.
Interesting, thanks for the video, the subject is confusing to many people.
Another observation I have regarding speed is that the f-ratio tells us nothing about the actual T-stop, i.e. the actual light transmission. For example, take an SCT at f/7 compared to an f/7 refractor. Both have the same "speed," but the SCT has a significant central obstruction that reduces the effective light transmission. Or at least that's my understanding. lol
It's an excellent review, because I was very confused about choosing the right astrophotography rig among 5 rigs; the only thing missing is a camera with a sensor larger than one inch.
Thank you very much.
Glad this is helpful!
Fantastic video Cuiv! I knew something other than just the focal ratio was involved, as I have seen many images with different FR vs. exposure time where squaring the FRs did not increase the quality as much as predicted. I like that étendue formula from John Hayes; he is quite technical - over my head. I think once you have the pixel étendue of your system, you can use that plus the resolution formula ((Pixel Size / Telescope Focal Length) x 206.265) to pick the best camera for your optical system. Once you have the optimal étendue and resolution determined, if you want a larger FOV, just get a larger sensor camera with the same pixel size. For example, the ASI1600, 2600, and 6200 all have the same size pixels, and thus the same étendue and resolution; the 6200 will just give you a much larger FOV - I think... Cheers, Kurt
Superb discussion Cuiv! As always, your analysis & presentation tickled the synapses in my brain! Thank you!👍BTW, it should be noted that all of this video & subsequent discussion focuses far more on the "Geek" part of your followers, and less on the "Lazy" part! But that's ok by me.😁Not surprisingly, I do have questions & comments...apologies for the length of the post, but you did get those synapses firing on all cylinders'😬):🤔
1. When you talk about "shot noise", are you referring to (a) 'thermal noise' during a single exposure, (b), 'background noise' due to natural/artificial light pollution as well as scattered light in the optical system, (c) another source of noise, or (d) several or all of the above choices?
2. Correct me if I am wrong, but if 'thermal noise' & 'read noise' remain constant (a given sensor), then won't a 'faster system' (larger aperture for a given focal length) allow shorter exposures to get above the 'background' noise, thereby enabling the imager to capture more sub-exposures during an imaging session (and more subs when the object is high in the sky, when there's no moon, and transparency & seeing are good)? This would increase the number of images that can be stacked in the most optimal conditions, and more stacked images mean less noise in the final stacked image (noise reduced by a factor of 'the square root of N sub-exposures' IFF desired signal is above all sources of noise...)?
3. IMHO, the 'quality' of an image depends on spatial resolution, the dynamic resolution (bit depth), the S/N, and quality of processing. If we're talking about the aesthetic nature of the image, then it would be fair to say that the more pixels in each channel that sample the object, the better, along with the highest dynamic resolution & the highest colour fidelity, AND - yes - S/N. So, having a faster system is better. However, there is another boundary condition to consider, and that is the resolution of the image that is shared (e.g. print of a photo, resolution of the projector/computer screen, and finally, the limits associated with vision (spatial, dynamic, colour resolution of the eye, and viewing distance).
4. Is there such thing as too much resolution? As you have pointed out in past vids, yes! Sampling the sky at a very high resolution often places unrealistic expectations about guiding performance, the effects of atmospheric turbulence ("seeing"), sources of vibration in the system (wind, stomping feet, nearby road/rail traffic, etc), flexure in the system, and so on.,
5. IMHO & FWIW...the rule that a sensor should sample the sky at a rate between 1" per pixel to 2" per pixel is FAR too restrictive & often too demanding. I have spent hours upon hours drooling over images taken by CCD & CMOS sensors over the decades, and I can attest to the fact that there a millions of mind-blowing, aesthetically-impactful astro images taken with systems that are sampling the sky at 3" to 5" per pixel. Furthermore, I will suggest that colour fidelity, pleasing levels of contrast, and framing are of equal importance as spatial resolution. Of course, it's better to process a higher resolution image, then reduce its fidelity to deal with the confines of the presentation medium & viewer considerations, but that also presents an exercise in diminishing benefits.
So, IMHO, as recreational astro-imagers who enjoy the process & products of 'taking pretty pictures' (rather than astrometry, variable star research, etc.), we really need to communicate the sources of enjoyment of the process & final product a lot more, and 'relax' the definition of optimal spatial resolution, especially for newcomers who want to take deep sky images of extended objects with a relatively short focal length system that won't empty the bank account or create a lot of debt.
6. Here's some of the bad news. All imaging involves filtering, including the microlenses over each pixel and, in OSCs, the Bayer array... and lower focal ratios degrade the performance of filters, which will affect your S/N and colour balance across the field (you already discussed some of the reasons on your channel). Overall, systems with a central obstruction and 'fast' focal ratios increase the likelihood that each stage of 'beneficial' filtering will - to some degree - negatively impact the quality of each sub-exposure. BTW, a great discussion of this issue is on the Altair Astro website: www.altairastro.help/info-instructions/cmos/affects-of-telescope-focal-ratio-on-light-pollution-filter-performance/.
OK...I'm going to give my tired, ADHD-stressed old brain a break here, and I will leave it to you & others with much younger brains to deal with it all. lol I hope all continues to be well for you, your wife & respective families!😀
1. Shot noise is the noise inherent in the signal itself - present even with an ideal sensor that has no read noise or thermal noise. Shot noise is equal to the square root of the signal - more details in my "noise in astrophotography" series!
2. A faster F-ratio will let you swamp read noise more quickly. However, it's not the number of subframes that matters but the total exposure time (10 x 10-minute subframes give the same SNR as 100 x 1-minute subframes, assuming swamped read noise).
3. I'd say resolution of the final image can be controlled in processing (as long as it isn't higher than what the telescope can capture)
4. Yes, but it's surprising how much how far we can go (see recent lukomatico video on binning)
5. True - I have an upcoming review of a FF sensor used with 5"/pixel
6. The bayer filters don't have an issue with fast FR, but narrowband filters will be affected - I have several videos on the topic already :)
Cheers!
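Point 2 above can be checked numerically; here's a small sketch with made-up sky rate and read noise values (electrons), using the standard stack-SNR bookkeeping where shot noise grows with total signal and read noise is paid once per sub:

```python
import math

def stack_snr(sky_rate, sub_s, n_subs, read_noise):
    """SNR of a stack: total signal over sqrt(shot noise variance
    plus one read-noise variance per subframe)."""
    total = sky_rate * sub_s * n_subs
    noise = math.sqrt(total + n_subs * read_noise ** 2)
    return total / noise

# Same 100 minutes of total integration; once the sky signal swamps
# the read noise, the sub length barely matters.
long_subs = stack_snr(sky_rate=20, sub_s=600, n_subs=10, read_noise=1.5)
short_subs = stack_snr(sky_rate=20, sub_s=60, n_subs=100, read_noise=1.5)
```

With these numbers the two stacks differ by well under 1% in SNR, even though one uses ten times as many subframes.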
@@CuivTheLazyGeek Thank you, Cuiv, for taking the time to read my missive & respond to my points. I will check out that series!
So it's good to hear one of you confirming that physics and optical theory are actually real, as is the focal ratio. Thanks also for the link to the cloudy nights forum and discussion on this topic. The binning subject needs another video. I invest in fast optics because I'm time poor living in NL, and like most amateurs, I cannot afford to waste a clear night on experimentation.
Binning is a great way to maximise your time under a clear sky. Lum = 1x1 and colour = 2x2, since the resolution of the image comes from the luminance channel (the same trick JPEG plays with chroma subsampling). This means you can use 1/4 of the time on the RGB data, focusing more on the L channel, which might be a Ha filter depending on the target. This is a huge win for mono imagers.
Always a pleasure :)
Forgive me if my maths is shaky but when considering a 780mm focal length telescope with either a Flattener or its 0.8 Flattener/Reducer using the same camera in both scenarios:
S1/S2 = (6.5/5.2)^2 (1) (1) (1) (1)^2
S1/S2 = (1.25)^2
S1/S2 = 1.5625
The 0.8 reducer delivers 1.5625 times more signal per pixel.
However, and here's my question: as the telescope becomes 624mm when fitted with the 0.8 focal reducer, the amount of light that each pixel is recording increases by the same amount as the section of sky that pixel is looking at. Surely, therefore, that also means that the individual pixel is only recording more light because it's being exposed to more sky. So doesn't that also mean that the individual galaxies and portions of nebulosity we image with the two setups (assuming the targets are smaller than the whole FOV) are no brighter?
Or to put it another way. At 780mm the galaxy covers 1000px with an average value per pixel of 1.0/px while at 624mm the same galaxy covers 800px with an average value of 1.25/px. But the overall signal from the galaxy in both cases is still 1000.
Yep, that's basically it. Whether you decrease focal length or increase the pixel size, the end result is that each pixel sees a larger solid angle of the sky, and at fixed aperture that means more photons per pixel!
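A quick numerical sketch of that conservation (pixel counts from the example above; note that strict area scaling gives 640 px at x1.5625 rather than 800 px at x1.25, but the total is conserved either way):

```python
native_px = 1000       # pixels the galaxy covers at 780 mm
signal_per_px = 1.0    # arbitrary flux units

reduction = 0.8
reduced_px = native_px * reduction**2    # 640 px: the image shrinks by 0.8 per axis
gain_per_px = 1 / reduction**2           # 1.5625x more signal per pixel

total_native = native_px * signal_per_px
total_reduced = reduced_px * signal_per_px * gain_per_px
# The totals match: the reducer concentrates photons, it does not create them
```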
Thanks for confirming that. I hear a lot of chats suggesting that the 0.8 focal reducer is somehow going to increase the speed by so many 'stops'. Which of course would be true for a camera lens, where the FOV remains the same as the aperture changes. I think this is a common source of confusion for many photographers coming into astrophotography.
@@AstroCloudGenerators A focal reducer only reduces the apparent focal length. The resulting image is "brighter" because it has squeezed the same number of photons into what's now, as you note, a smaller object image. Looked at differently, a FR increases the apparent FOV for a given size sensor.
The piper must be paid, however, because a FR doesn't change the laws of physics. The actual FOV and focal length didn't change, and the total number of photons stays the same. So how does that manifest itself? Think about what happens at the original edge of the FOV: it's squeezed towards the center. The result is a smaller image circle!
So, if your sensor size fits well within the reduced image circle, you're fine. If it doesn't you'll see vignetting. Might as well crop the sensor down.
Question then becomes, are you better off with a FR or a FF?
To which I say the answer is obviously *
😅
I have not been a fan of describing telescopes as fast or slow based on f-ratio. I do understand that luminance at the sensor is completely defined by the f-ratio: lower f-ratio, more luminance. Aperture doesn't even factor in. For two telescopes with the same aperture, the one with the shorter focal length will make a brighter image with more luminance. Are you getting more light with the shorter focal length? No, just a smaller image with the light more concentrated. The longer focal length telescope will be just as "fast" if the pixel size is scaled up in proportion vs. that used on the shorter focal length telescope.
I agree it is better to take a total-system view with one's objectives in mind and pick the best compromise. I further think it is best to stick to the fundamental measurements of aperture, focal length, and pixel size when thinking about imaging systems. The f-ratio is not a fundamental measurement, as it is just focal length divided by aperture.
I own a 65mm quintuplet refractor (fell for the hype with refractors) but since switched to an Orion 8-inch astrograph. Got it used for $150, and the difference in image quality is amazing. I use an ASI533MC Pro, so the FOV seems to be perfect for the 8-inch; stars are sharp. The only problem is my mount, a Sky-Watcher EQM-35, so it's pushing the limits lol.
My head just exploded! … Seriously tho, great video Cuiv. Worth doing a deep dive into the subject. I do wish that you would include DSLRs more often in your discussions tho.
Glad this was helpful! In that case, how should I include DSLRs? They work exactly the same (sensor size, pixel size, lens raw focal length, lens aperture diameter)
I think a white board would help... Other than that it was an interesting comparison of the various factors and their relationships to each other in a theoretically perfect system! 👍👍
Thanks!
@@CuivTheLazyGeek 👍👍
I use aperture net area * (pixel scale^2) to compare different scopes for speed. Interestingly, a 150mm/1200mm f8 Newtonian with a 533MM Pro in bin 2 mode is twice as fast as my 90mm TS CF f6 APO in bin 1, for similar pixel scales. I might take a chance on the f8 Newt as a galaxy scope; with a 1.3"/px image scale, the 533MM binned would be 1500 x 1500 px, so resolution should be OK.
That's actually effectively the same thing as the formula presented, without central obstruction or transmittance (pixel size, focal length, and aperture are used, equivalent to focal ratio and pixel size :) )
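The comparison can be reproduced from that formula. A sketch using the numbers in the comment, with the APO's 540 mm focal length inferred from "f6" and the Newtonian's central obstruction ignored:

```python
import math

def speed(aperture_mm, focal_mm, pixel_um):
    """Relative imaging speed: clear-aperture area times pixel scale squared."""
    area = math.pi * (aperture_mm / 2) ** 2    # mm^2, central obstruction ignored
    scale = 206.265 * pixel_um / focal_mm      # arcsec per pixel
    return area * scale ** 2

newt = speed(150, 1200, 3.76 * 2)   # f/8 Newtonian, 533MM Pro binned 2x2
apo = speed(90, 540, 3.76)          # 90 mm f/6 APO, bin 1 (540 mm FL assumed)
# newt / apo comes out to 2.25, i.e. roughly "twice as fast" at similar pixel scales
```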
Cuiv IS awesome! You are very welcome 😊
Thank you!
It should be related to the "resolution"...
Instead of per pixel, express it per arcsecond, and it can later be adjusted for seeing and under/oversampling. :)
That's a good approach that some people take, although then you do dissociate from the read noise of each pixel!
@@CuivTheLazyGeek You are absolutely right. But perhaps converting the pixel scale as well as its noise to an arcsec scale (read noise/") would be a nice cheat for that :D
I quite often think in arcseconds instead of pixel scale, because of the old days with analog film, which brought us the idea of F/D as speed :)
@@CuivTheLazyGeek by the way... you should try some analog film imaging :D
@@robsonhahn don't forget that when you're computing something like read noise/arcsecond, you have to use the squares of the read noise! So if you have a pixel with 4 e of read noise that covers 4 arcseconds squared of sky, we have 2e of read noise per arcseconds squared of sky and not 1e!
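In code, the quadrature bookkeeping looks like this (numbers from the comment above):

```python
import math

read_noise_e = 4.0      # e- of read noise per pixel
pixel_sky_as2 = 4.0     # arcsec^2 of sky covered by one pixel

# Noise combines in quadrature, so spread the VARIANCE over the sky area,
# then take the square root again:
variance_per_as2 = read_noise_e**2 / pixel_sky_as2   # 16 / 4 = 4 e-^2 per arcsec^2
noise_per_as2 = math.sqrt(variance_per_as2)          # 2 e-, not the naive 4 / 4 = 1 e-
```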
Thanks for the video. Nice to watch, as usual.
But.
Image circle is also important. Consider the same focal length but one telescope at f2 and one at f4. The one at f4 can be used with full frame, but the one at f2 accepts only smaller sensors, let's say 1/4 the size of a full-frame sensor (OK, not so realistic, but one gets the idea, I hope).
At the same time both telescopes can image the same given big area of sky, thus they are equally fast for the purpose of mosaics.
Further, if you want to do narrowband with the f2 scope you need a wider bandpass, leading to more signal from light pollution, inducing noise, thus worse SNR.
Your superfast f2 telescope with, say, 6nm preshifted filters now performs worse than a decent f4 telescope with 3nm filters.
Such considerations should be included in the formula....
Clear skies
To reuse your comment format:
Good points!
But.
The image circle is whatever the manufacturer says - there are no criteria for it, or how much vignetting is acceptable with the image circle. So it's difficult to actually compare image circles of different manufacturers without knowing exactly their criteria or their MTF :) Overall this is taken into account by the sensor size I mention in the video, as the sensor étendue formula, making the assumption that the telescope supports that sensor size.
Narrowband is true as well, I've done multiple videos on the topic, but I can't touch on everything in a single video.
Should also include spot diagrams. And budget. And, etc. etc. There's just too much for a single video so I have to stick to one topic
@@CuivTheLazyGeek
Right. I did not understand etendue in the first place. Thanks for clarifying.
CS
It's just a ratio; by itself it means nothing. Aperture, pixel scale, sensor efficiency, sensor size, exposure, and all the other variables of sending photons down a tube with varying quality optics and alignment, let alone atmospheric seeing, make far more difference than f3 vs f4, which is just an expression of field of view!
Focal ratio is something for terrestrial cameras, with variable aperture (so variable focal ratio) and an abundance of light.
Awesome, very informative!
Glad it was helpful!
How about a reflector telescope? Can the secondary mirror obstruction change the real focal ratio?
Hello,
You have the precise and detailed answer in the "technical pages" of Thierry Legault's website (and in his book, of course): go to "technical pages" then "obstruction"... (Thierry's website is in French and English)
It's mentioned in the video and in the formula presented! And yes you can also go down the rabbit hole with the link provided by the other comment :)
About binning color images, DSS has a technique called super pixel debayering, which is 2x2 binning and debayering at the same time. This solves the issue of interpolating colors and at the same time it boosts the signal at the expense of resolution. To me this sounds like the best of both worlds for a color camera with small pixels. What's your opinion on this approach?
That sounds like the standard super pixel technique, which does work great at the cost of resolution :) It's a good technique, but it all depends on what equipment we are looking at, seeing, etc.
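For the curious, the super pixel idea is easy to sketch with numpy. A minimal version assuming an RGGB pattern (DSS supports the other Bayer orders as well):

```python
import numpy as np

def super_pixel_debayer(mosaic):
    """Collapse an RGGB Bayer mosaic into one RGB pixel per 2x2 cell.

    No interpolation: each output pixel takes R and B directly and
    averages the two greens, halving resolution per axis.
    """
    r = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b = mosaic[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

# 4x4 mosaic -> 2x2 RGB image
raw = np.arange(16, dtype=float).reshape(4, 4)
rgb = super_pixel_debayer(raw)   # shape (2, 2, 3)
```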
I guess this explains why I could never work out why a 11” Rasa is better than an 8” Rasa, despite both being F2 but having different aperture sizes. 😮
I'd approach this subject from the pixel scale side of things. You used the 715 sensor with tiny pixels. Compared to the 585 it made your imaging rig 1/4 as fast, but was it worth it? I think it was in some ways. The real question is, what can we get away with, under what use case and conditions?
Does shot noise scale in the same way though? Is it actually pushing you back by half… I think how tall the grass is might deviate. I understand what you mean, but it might be worth mentioning it’s light pollution dependent. Nice summary of parameters, thanks.
I've finally settled on the flexibility of a C925 Edge with 0.7 reducer & Hyperstar V4. I wish I could get an IMX571 camera with slightly smaller than my current 3.76 micron pixels, though, to try and take advantage of the resolution my aperture is capable of.
Yeah right now the 585 provides smaller pixels but the sensor is small as well :/
So my C9.25 isn't really f/10 (2350÷235). In reality it is the 235mm objective diameter minus the 85mm obstruction diameter (36%), giving 150mm, and the focal length of 2350 divided by that gives about f15, not the f/10 as commonly advertised and always referred to. So the Origin isn't truly f/2.2, and a C6 with Hyperstar is really an f/2. Correct?
Thanks Cuiv for a video that's making me think! Brilliant!
So that's not correct! You can't just subtract 85mm from 235mm, because 1mm at the edge of the aperture is very different than 1mm close to the center. You need to subtract the two areas and deduce an equivalent aperture, which in your case is 219mm, for a transmission ratio result of T10.7, excluding loss to corrector plates and mirrors (the focal ratio is still F10, because of its definition - important to keep it as is, as it also determines the largest incidence angle through filters)
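The corrected arithmetic, as a quick sketch:

```python
import math

D, d = 235.0, 85.0               # C9.25 primary and obstruction diameters, mm
focal_length = 2350.0

equivalent = math.sqrt(D**2 - d**2)   # clear-aperture diameter with the same area, ~219 mm
t_number = focal_length / equivalent  # ~T10.7, excluding mirror/corrector losses
# The geometric focal ratio stays f/10 by definition (focal length / physical aperture)
```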
But remember guys... the F2 scope delivers 2 times better SNR, but if the F4 tries to keep up with that 2-times-better SNR, it has to integrate 4 times longer. And I think reducing capture time and getting more out of those rare moonless clear nights is absolutely key.
In the end we are viewing our astrophotos on our PC, for example as a full-screen wallpaper, or as a print of a specific size. And no matter what equipment we use, we are always looking at the same size on our results. And the one thing that determines the quality of our result the most is the complete sum of photons we are looking at. It doesn't matter that much if we are looking at a picture with 3840 x 2160 compared to 7680 x 4320 pixels, if we invested the same amount of photons in both variants. The higher resolution picture will have a worse SNR, but its pixels are displayed 4 times smaller, which evens out.
The sum of photons we collect is mainly determined by aperture (area in mm²!) and sensor size (area in mm²!), plus smaller factors like QE, central obstruction, light transmission of filters, etc. To compare: aperture of a 50mm refractor: ~1,960mm² area. 200mm Newton: over 31,400mm². That is a factor of x16 difference!
Also sensors: IMX585 = ~71mm², full frame = 864mm². That is a factor of x12 difference!
Keeping those huge numbers in mind, all those other factors like QE, read noise, or whatever else you will find become negligible.
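Those factors are quick to verify (the ~71 mm² small-sensor area is taken as quoted in the comment; full frame is 36 x 24 mm):

```python
import math

def disc_area_mm2(diameter_mm):
    """Area of a circular aperture in mm^2."""
    return math.pi * (diameter_mm / 2) ** 2

refractor_50 = disc_area_mm2(50)    # ~1,963 mm^2
newton_200 = disc_area_mm2(200)     # ~31,416 mm^2
aperture_factor = newton_200 / refractor_50    # 16x

small_sensor = 71.0         # mm^2, figure quoted in the comment
full_frame = 36.0 * 24.0    # 864 mm^2
sensor_factor = full_frame / small_sensor      # ~12x
```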
It is also much, much more expensive to make good sub-F/4 optics, and filters for them. It is always better to have a focal length that suits your camera, and optics that draw well at that focal length, than to go for speed at the expense of FL or optical quality. If you get something like an Epsilon, the cost of the scope is only the very beginning... and the FL is only good for a limited number of things.
That's a very good point!
Thanks
Thank you for your support!!
So... using the f-ratio comparison formula, your RedCat 51 at f4.9² is allegedly 4.16 times faster than your SE6/C6 at f10², "everything else being equal"...
BUT it rarely is in comparisons...
(If I understand this correctly...) the capture area of your SE6/C6 has a 14% secondary mirror obstruction by area (Celestron), so (150/51)² = 8.65x more capture area, and 8.65 x 0.86 for the obstruction = 7.44x. IOW the SE6/C6 has 7.44x the true capture area/light-gathering ability of the RedCat 51...
Making the SE6/C6's true advantage 7.44 / 4.16 = 1.79x: actually 1.79x faster optics for exposures, and ~3x the resolving power of your RedCat 51. In addition, the SE6/C6 image will be (square root of 1.79 = 1.34) ~34% less noisy???
"My Brain Hurts" -Monty Python's Flying Circus 🍺🍻 ahhhh 😎
I made a good choice buying the Askar FRA400 (f/5.6) plus its reducer (280mm, f/4)...
and a 2600MC Pro.
But what about a bigger rig, like 800-1200mm? I am still doing research... and I have no idea what to choose.
An EdgeHD 8-inch with some reducers... or a refractor like the Askar 120 APO...
Help 😁😁😁😁🤯
I love it..."And that's basically it." LOL!! Sometimes I think you're really an AI bot, Cuiv! Seriously, thanks so much for all this. After I watch another 30 times, I think I'll be able to soak all this up. I wonder if there really IS an AI app that can ask us what we want to do and what equipment/sky we have, and spit out..."Here's the best solution." Thanks again, Cuiv!
Would be interesting to ask ChatGPT :p but then AI is often confidently wrong :)
@@CuivTheLazyGeek Yes, "confidently wrong" is very true!! I wonder if the AI would try to kill you when you told it that it was wrong. 😳Thanks, Cuiv!
Very interesting, although I find topics like this are usually not too relevant to where I live due to poor seeing being the norm, and decent transparency rarer still . lol
Probably can't link another YT video but Sky Story has a great video "Understanding Focal Length: Trading Speed for Detail" to help resolve our understanding of this topic
I just had a look! The conclusion and some of the explanation is very good... but the main part of the explanation (with the light rays crossing) is incredibly incorrect since the target is effectively at infinity. I'll leave a comment on his video, that's actually quite bad (he's explaining with the light rays crossing why an in-focus image is brighter per pixel than an out of focus image, NOT why focal ratio causes this...)
@@CuivTheLazyGeek I was so focused on some the different aspects he brings up I didn't pay near enough attention to that! I did think the twists in the rays was rather peculiar and even distracting. Thanks for responding and pointing it out. I only want to perpetuate factual information.
@@old_photons unfortunately Cliff seems to be confidently wrong in this case... Several others have tried to convince him on Facebook where we also posted a link to his video, but he's not backing down... Oh well, it is what it is!
@@CuivTheLazyGeek Sure enough, I've been following the YT comments over there. I plan to watch the physics class posting you shared and see what other optics stuff I can find to broaden my own understanding, while I wait for new content from all my regular content creators :)
There isn't much replacement for displacement; I prefer just having a bigger aperture and reducing the issues that come from a super low ratio.
Cuiv, stop reading at Brilliant. You are getting too smart. LOL. Love your channel.
Hahaha thank you!
Optical laws tell us that a longer focal length darkens the background. Spot size is linked to the F/D ratio, and the energy captured is linked to the size of the main mirror, so each instrument has its own noise level. On the sensor side we have the pixel size, the gain according to its spectral sensitivity, and its saturation limit. Of course, the bigger the pixel, the greater its capacity to capture energy, but once again the type of telescope it is attached to will have an impact on results. With a short F/D telescope the sensor will saturate quickly. The size of the telescope mirror gives the resolving power, the details: this is why a bigger mirror at the same F/D is very attractive for the same field of view. The spot size is the same, but the captured detail is very different... But this is valid in space only. On Earth the story is much different, which is why Antarctica can offer better seeing: the air there can be stable over a 1m diameter, so any amateur telescope can reach its theoretical optical resolution. Usually, in good weather, the stable air column is in the range of 200mm diameter, which is also about the limit of tracking resolution on telescope mounts. At altitude with dry air the situation improves a lot, which is why the big telescopes are where they are. For the sensor choice, matching the resolving power is a starting point: choosing smaller pixels will not change anything for better resolution. So far the C14 at F2 is the best compromise for people without the opportunity to reach an exceptional observing site. Above this diameter, resolution is difficult because the Airy spot is a victim of atmospheric instability, and it is compulsory to find a site at altitude with stable air.
It is clear the manufacturer will only deliver a top-quality instrument if it costs a fortune, because in optics the time spent to get a first-class instrument makes it an art product. I learned that the compromise with CMOS is in the range of a 5 micron pixel size. Why? Because producing 5 micron spots at 15mm from the optical center is very difficult; under this value, binning is not an option for industrial telescopes. Of course, if we live in a perfect site and have an instrument capable of spots under 5 microns at 15mm, the situation is different. I'm not sure these details were expressed in this video...
Cuiv - I am having difficulty understanding why a longer focal length results in a dimmer object. For a given aperture size, wouldn’t the photons fall onto a tighter FOV due to a longer focal length and therefore more photons per area of FOV? I know this isn’t right because from observational astronomy I can easily tell that a higher focal length results in a lower brightness of image. What am I missing?
Basically look at it from a resolution perspective: if the pixel size stays constant but the focal length increases, each pixel will see a smaller area of the sky (better resolution), but as a result fewer photons will reach that pixel (imagine a "white wall" nebula that has a constant photon flux across its surface: if a pixel sees one arcsecond squared of that nebula, it receives 4 times fewer photons than a pixel that sees 4 arcseconds squared of the nebula!)
Thank you!
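The inverse-square dependence can be written down directly. A sketch with hypothetical focal lengths:

```python
def relative_photons_per_pixel(focal_mm, ref_focal_mm):
    """Photons per pixel vs. a reference focal length, fixed aperture and pixel size."""
    return (ref_focal_mm / focal_mm) ** 2

# Doubling the focal length means each pixel sees 1/4 the sky, so 1/4 the photons
same_fl = relative_photons_per_pixel(500.0, 500.0)   # 1.0
doubled = relative_photons_per_pixel(1000.0, 500.0)  # 0.25
```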
??? Head exploding. Like many in this hobby, I have acquired a few telescopes and cameras. As I have a range of focal lengths, I got cameras that would give a good pixel scale (between 1 and 2 arcseconds/pixel). But I'm not sure the pairings are the best. FL/Aperture/Camera:
RedCat51- 250mm/51mm/ASI183MC Pro gives a 2.01 x 3.02 degree FOV;
AT115 with 0.8 F/R- 645mm/115mm/ASI294MC Pro gives a 1.15 x 1.70 degree FOV;
AT115- 806mm/115mm/ASI294MC Pro gives a 0.92 x 1.36 degree FOV;
Askar71F- 490mm/71mm/ASI585MC Pro gives a 0.93 x 1.25 degree FOV;
F-numbers are 4.9, 5.6, 7.0, and 6.9, respectively. The last two appear very similar but the AT115 has the best aperture and probably optics.
Your shirt: a subtle reminder that all calculations are for naught if you can’t see through the stuff.
This just takes away whole joy from astrophotography, theory 😂 Just go outside and capture photons, enjoy.
I love the theoretical aspect too, but maybe that's just me :)
Surely focal ratio is only comparable if everything else stays the same? Comparing the f-ratios of two totally different aperture scopes is meaningless?
It's not meaningless either, but that's answered more in the video :)
I think the central obstruction's impact should also be considered. You can have a crazy fast f/2 RASA telescope, but when half of the "aperture" is covered by the camera you can't say that it is still a real f/2 ratio, and that this telescope gathers "the same amount of light" as, let's say, an f/2 refractor without a central obstruction. OK, I see it is included in the further calculations.
But it does! Think about this: if I add 1mm to the center of a mirror, the additional area is small, right? If I add that same 1mm to the edge of the mirror, well, that's a much larger additional area. Now, the central obstruction does affect the contrast. That's why visual telescopes like Dobs keep the central obstruction small (22%).
But doesn't the noise reduction possibilities kind of make the pixel size question less important? I'm not an astrophotographer yet, but I'm a photographer, and with the powerful noise reduction tools we have now, resolution seems to always be the winner (noise artefacts are also smaller on smaller pixels).
I wish I could say I have no idea what you are talking about, but unfortunately I do. Sometimes, ignorance is bliss :)
Hahaha that's not a bad thing though!
Another great example of the minutiae that overthinking astrophotographers focus on. You nailed it. The reality: you need good optics, a good camera, and good skies (transparency and seeing). That's the ratio they need to make. Help people not buy unnecessary equipment that can't perform in their location. A Planewave under a B9 sky vs an Askar under a B1 sky: the Askar wins every day. Just an example of how I think about it. Just one man's opinion. Same goes for the mounts. Why spend so much on a mount that your system doesn't need? The story is similar with flats and dark flats. So much fuss to get a .000001 percent better image; I use sky flats at whatever time they turn out to be, and bias, like you. That's it. I'm so glad for that "brilliant" brain of yours. You have common sense!!! Thanks again for the hard work.
Thanks Anthony! It's still fun to think of the minutiae, but yeah at the end of the day, want better pictures? Images from a B1! ;)
Surely Cuiv is awesome?
Hahaha thanks :p
@@CuivTheLazyGeek ty for the vids
Cuiv - that French word for aperture squared times sensor area. How do you spell it?
Étendue
In French, we specify "étendue géométrique" or "étendue optique": in fact, "étendue" used alone is the translation of "extent".
There you go, you've "étendu" (stretched yourself out) again 🤣
Or even détendu (relaxed) 😂
Étendu a bit too much on this subject, for sure!
Great video, and congrats on your 50k subs, 👏🏻👏🏻 I am 1 away from 900 😂😂
It's a whole journey!
Phew!
Indeed! :p
What really confuses me is that we use the same terms in different ways when we are talking about "regular" lenses and cameras vs telescopes. A 100mm lens vs a 100mm telescope is not the same?? F-stop is not the same F you discuss in this video. T-stop seems like it is similar to the T you were talking about, but not exactly the same thing. So how do we compare these two worlds of imagery? I know more about cameras than telescopes, but I don't know how to translate that knowledge into the telescope world.
Yeah, in photography "aperture" often means the focal ratio, and the physical aperture diameter of the lens is ignored. And "100mm lens" refers to the focal length, whereas "100mm telescope" refers to the aperture diameter. And then you have crop ratios, full-frame equivalent focal lengths, etc. to make things more complex. This is because photographers and astrophotographers have different priorities!
@@CuivTheLazyGeek I kind of understand the different needs, but it would be nice to know how this lens compares to that telescope, etc. Sounds like a great subject for a video...
thank you for the reply
Are you used to considering the MTF of a scope? It would cover all these aspects.
I didn't buy a dream rig. I instead bought a minivan to haul my equipment to darker skies 😊
I didn’t get a minivan, but I bought a rooftop tent so I could sleep overnight at my dark site and still be functional at work. Went from only imaging on Friday and Saturday to any night the conditions are good. Best investment in AP I have made.
Now that's a good investment!
The answer to everything is =42.
Surely more photons hit the larger sensor than the smaller sensor?
Depends on what you put in front of them! Pixel size needs to be matched to image scale; "Astro Tools" has a nice calculator for this, Google it. As a general rule: short focal length scope = small pixels; longer focal length scope = larger pixels.
Yes, as mentioned in the video :)
@@CuivTheLazyGeek It's why I went with the 533 over the 585. Seems the wiser choice.
Cuiv the master teacher, thank you sir, I learn so much from you and Dark Rangers Inc, two of the best.
Glad this was helpful!
A similar theme in this video, which gives us a spreadsheet to compare different optical trains: ruclips.net/video/HiJoqQp1qFI/видео.html
Hey, that's the guy who wrote the formula ;)
Rats! So, I bought my Hyperstar for nothing?
I'm not sure if Cuiv is correct. F2 gathers 4x more light than F4. I think a 2-minute exposure with both would have the same shot noise. Why would the shot noise increase with a faster scope?
Hahaha no you didn't buy the Hyperstar for nothing! After all it's a relatively large aperture compared to things like fracs ;)
Because shot noise is equal to the square root of the signal! More signal automatically means more shot noise. Mathematically, this is because a rain of photons onto a detector can be modeled by a Poisson distribution, whose standard deviation is the square root of its mean!
@@CuivTheLazyGeek I see. So 1hour at F2 is equal to 2hours at F4 and not 4hours like I always thought?
@@indysbike3014 No, it equals 4 hours, but 4 hours only means doubling the SNR
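The exchange above checks out with the sky-limited SNR formula. A sketch in arbitrary flux units, ignoring read noise:

```python
import math

def sky_limited_snr(relative_flux, hours):
    """SNR when shot noise dominates: signal over sqrt(signal)."""
    signal = relative_flux * hours
    return signal / math.sqrt(signal)

f2_1h = sky_limited_snr(4.0, 1.0)   # f/2 collects 4x the flux of f/4
f4_4h = sky_limited_snr(1.0, 4.0)   # f/4 needs 4x the time to match
f4_1h = sky_limited_snr(1.0, 1.0)
# f2_1h equals f4_4h, and both are only 2x f4_1h: 4x the signal buys 2x the SNR
```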