Great explanation. Interestingly, the human eye has evolved with the wiring in front of the sensor (the retina), whilst squid eyes have the wiring sensibly behind the sensor. We have standard CMOS; squid have BSI CMOS. Squid vision is many times more sensitive.
Thank YOU👍
My God, you are a genius. With my machining-centered education I have a hard time putting photography and radio/television theory into the same box. Physics-wise that is what it is, but for me it's hard to see the similarities (no pun intended). I am going to have to watch this video about 12 times to understand it fully. Thank you for putting out content like this!
more videos like this one, please.
learning is best when the teacher really understands what he's talking about.
you have too much FSI (front-side illumination) on your white board.
LOL
The camera was probably set on auto exposure. When his arm is in the picture, the exposure increases.
My bro....this must be the best explanation of anything ever 👍👍👍👍 now I finally get this thing with luxury of details and context. Thanks a lot 🙏🙏🙏🙏
Well done. As a ham, I appreciate the antenna analogy, although most of your subscribers probably do not. In astrophotography cameras, the term gain is commonly used (not ISO). Mono cameras are preferred for serious deep-sky work, although they cost a bit more. Example: ZWO ASI1600MM. Filter wheels are used with LRGB and narrowband filters.
You should teach this in a university. I'm impressed by the amount of knowledge you've packed over the years when it comes to light and transmission. I've learned a lot with this video so thank you! I still can't figure out what difference it makes concretely in terms of image quality not having a chance to compare both outcomes but I'll do some research
Damn! The new Leica M11 Monochrom has a BSI CMOS monochromatic sensor with 15 stops of dynamic range. And you talked about this 5 years ago. Amazing.
Good watch, one of your best videos
Indeed
Hmmm... great lesson. However, I would've thought the "volume" control in the radio analogy more related to the ISO, whereas the "gain" knob is more analogous to improving signal-to-noise. If you turn up the volume on a radio, you increase everything, noise included, but if you improve the gain, you get a better signal-to-noise ratio.
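To put that distinction in code, here is a toy numeric sketch (not any real radio's signal chain; the scale factors and noise levels are invented for illustration): scaling signal and noise together leaves SNR unchanged, while boosting the signal before a fixed downstream noise is added improves it.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 1000))   # clean station audio
rf_noise = 0.2 * rng.standard_normal(1000)         # noise that arrives with the signal
output_hiss = 0.2 * rng.standard_normal(1000)      # fixed noise of the output stage

def snr_db(sig, noise):
    # signal-to-noise ratio in decibels
    return 10 * np.log10(np.mean(sig ** 2) / np.mean(noise ** 2))

# "Volume": everything after the detector is scaled together, so the ratio
# of signal to total noise is unchanged by the knob itself.
volume_snr = snr_db(10 * signal, 10 * (rf_noise + output_hiss))

# "Gain": the wanted signal is boosted before the output stage adds its hiss,
# so the hiss matters relatively less and SNR improves.
gain_snr = snr_db(10 * signal, 10 * rf_noise + output_hiss)
```

Running this, `gain_snr` comes out higher than `volume_snr`, which is the commenter's point in numbers.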
I would like to add that a monochrome sensor can have a 4-times-bigger pixel because it doesn't need a sub-pixel for each color within a pixel.
yup
Harry, great point you made, but would love to have a further detailed explanation on that. Thanks.
Each pixel on a color image sensor consists of 4 sub-pixels with 3 different colors of filter under the micro lenses (1 red, 2 green, 1 blue). If the sensor is monochrome, there is no need for a sub-pixel per color, so each pixel can be bigger (given the same resolution) to collect more light, and it collects even more light without the color filter.
For more, read this page:
wilddogdesign.co.uk/blog/monochrome-digital/
Actually, no. Screens have subpixels; imaging sensors don't. That's the whole point of the Bayer/X-Trans mosaic pattern. Technically a 24-megapixel sensor has 24 million dots, some filtered for blue, some for red, and some for green. The image color is then computed from that with demosaicing algorithms. Download RawTherapee and see for yourself by setting demosaicing to 'none'. I have also looked at sensors under the microscope.
Or even just read that page you shared again.
"each 2×2 grid of pixels will have 1 blue pixel, 1 red, and 2 green ones"
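A toy sketch of that RGGB mosaic idea: every photosite records a single brightness number, and color is computed afterward. This uses a crude per-tile collapse purely for illustration (real demosaicing algorithms interpolate to keep full resolution):

```python
import numpy as np

# A 4x4 mosaic read-out: every photosite records ONE number; the RGGB pattern
# says which color filter sat over each site.
raw = np.arange(16, dtype=float).reshape(4, 4)

def naive_demosaic_rggb(raw):
    """Collapse each 2x2 RGGB tile into one RGB value (no interpolation)."""
    rgb = np.zeros((raw.shape[0] // 2, raw.shape[1] // 2, 3))
    rgb[..., 0] = raw[0::2, 0::2]                           # R sites
    rgb[..., 1] = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2   # average the two G sites
    rgb[..., 2] = raw[1::2, 1::2]                           # B sites
    return rgb

rgb = naive_demosaic_rggb(raw)   # a 2x2 color image from a 4x4 mono mosaic
```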
Ken, here is my understanding of how an image sensor works in a camera, see if I got it wrong.
The photodiode generates electricity when hit by light, much like a solar panel. That charge is stored in the capacitor of each sub-pixel. After the exposure ends, the camera's image processor reads the voltage of each sub-pixel through the ADC between the processor and the sensor (some sensors have the ADC built in). On a sensor, each pixel consists of 4 sub-pixels with color filters underneath the micro lenses (1 red, 2 green, and 1 blue: RGGB). After reading all the sub-pixel voltages, the image processor either records the readings in a RAW file or converts the RGGB data into a JPEG.
You are incorrect. Full frame sensors have twice the image quality.
Best,
Tony
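The read-out chain described in that comment can be sketched as a toy model. All the constants here (full-well capacity, bit depth, quantum efficiency) are hypothetical round numbers, not any real sensor's specs:

```python
# Toy read-out chain for one photosite:
# photons -> electrons in the photodiode -> voltage on the capacitor -> ADC counts.
FULL_WELL = 30000   # electrons the capacitor can hold (hypothetical)
ADC_BITS = 12       # raw counts span 0..4095
QE = 0.5            # quantum efficiency: electrons generated per photon (hypothetical)

def read_photosite(photons):
    electrons = min(photons * QE, FULL_WELL)           # the well clips when full
    voltage = electrons / FULL_WELL                     # normalized capacitor voltage, 0..1
    return int(round(voltage * (2 ** ADC_BITS - 1)))   # ADC quantizes to a raw count

dark = read_photosite(0)          # no light -> raw count 0
bright = read_photosite(100000)   # overexposed -> clips at the maximum count
```

The clipping in `read_photosite` is why blown highlights cannot be recovered from a raw file: once the well is full, every extra photon reads the same.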
Very clear explaination Theoria, what about stacked sensors ?
Just heard the Nikon Rumor that the 780 will have a BSI sensor. This video provided a great explanation of the tech.
The nice thing about CMOS sensors: that lovely remanence, waiting for the VC to come along and take its temperature, instead of the counter hitting the end stop and saying crap, stop counting, I'm hitting 16x1 all the time. No zeroes! No no, wait, what about all these zeroes... Tough, dude... we are full! It's a moon shot, man! Except it isn't!
Thanks for explaining. So, get the trees out front to the back so they dont block any sunlight headed for the solar panels.
Totally excellent video. It's in the top one percent of your photography related videos.
Well done ," BSI sensor for dummy's" by Angry Photographer. .
Now I understood it, Thanks Ken for being so cool explaning it, I was used to the ASA setup of film times.
You're a god damn genius
Damn Ken, Amazing Info...Your Haters have an agenda because I don't and you have some of the best real info out there.
Thanks Ken, very helpful! Thanks for your time doing these videos...
By now the camera industry should be experimenting with new technology such as SPAD and 1000x more sensitive graphene-enhanced CMOS sensors. Unfortunately Canon still uses outdated front-lit sensors.
Do you know why they put the wire in between in the first place. Is it simpler to manufacture or why was this the standard before BSI sensors?
Thanks. I was thinking they lit up or something.
As for snr
For a lens test I was trying to make a value (that I was not sure what to call) using only shutter speed and aperture. I was thinking of calling it camera work or something, because my first thought was to use EV, but the charts all required an ISO.
I think my test has fundamental flaws; I will either rethink it or just keep my errors consistent.
Thanks for the lesson, Ken. I would fly out to you if you had a class on the inner workings of a camera. You should really think about putting together a workshop.
nobody should wanna get into fixing cameras :)
Great! Now I can justify that R5 MkII purchase to the Mrs., granted it'll have a BSI sensor. 🤣
Thank you for this I was very interested in the difference.
Ken, you said the PD's will eventually be gone. What will replace them?
May I ask, what is the purpose of the wiring harness you mentioned and why is it needed? In the BSI model it doesn't seem to have a purpose in the makeup of the image, am I correct or am I missing something? Is the wiring harness only structural?
signal transmission to the A/D converters, ie the image :)
odd...why would anyone design the conventional sensor with the wiring in between the photo diode and lens? There must be more to it... but glad you made this video. It opened my eyes.
existing technology dictated design parameters
Great info! I took electronics in high school; we had a small broadcast FM transmitter. I understand this!!! I used to work on the last generation of phototypesetters (Autologic, Compugraphic) and even the older photomechanical Mergenthaler VIP, or Variable Input Phototypesetter.
Very well explained!
thank you so much, fun and instructive. I will check your other videos.
Great video! You're a good teacher 😊
Great explanation. Thank you.
Why did they ever make CMOS? Seems obvious to keep the wiring out of the way?
Are you going to add Foveon sensors, they don't require color filters...
Is a 4-micron pixel pitch still needed in a BSI sensor or can it be smaller?
3.3 seems about it
Superb!! Classroom Teaching Professor Ken!! Just curious! Who has the Edge in 2018 on BSI Sensor innovation and development? Samsung vs Sony vs or? Who's the Innovation leader within BSI with the most promising scheduled pipeline? Who to look to beyond 2018 in BSI development technology?
Why do you think that initial image comparisons with the X-T2 are showing minimal discernible difference?
cause it's a smaller pixel pitch
Superb! Well explained 👏🏻
Ken, this video is making me second-guess my pre-order. I would rather have another stop or so of dynamic range than a few more megapixels. Fuji should be able to promote this and not handicap the camera's dynamic range. A photo with more dynamic range is a lot more pleasing to look at than a couple of extra megapixels. Image quality does not look any better than it did with the X-T1. I wonder if it has actually lost dynamic range since the X-T1.
G + T + SNR = Exposure looks like a decibel calculation to me, which conceals the fact that the quantities are really being multiplied, which makes more sense.
I never even implied it was A + X + Y
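The point in the comment above, in one small example: quantities expressed in stops (or decibels) add, while the underlying linear quantities multiply. The log form is just a notation that hides the product.

```python
def stops_to_linear(n_stops):
    # each stop doubles the amount of light
    return 2.0 ** n_stops

# Adding stops (log domain) is multiplying light (linear domain):
# +2 stops then +3 stops = +5 stops, i.e. a 4x factor times an 8x factor.
combined = stops_to_linear(2 + 3)
separate = stops_to_linear(2) * stops_to_linear(3)
```

Both come out to 32, which is why exposure arithmetic done in stops conceals the multiplication.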
0:14 the video barely starts and I am laughing like I'm nuts.
Your link between radio antennas and the camera sensor appears interesting, but you did not explain it. How does the silicon sensor equate to a radio antenna? I expect that diving into the details will prove interesting.
light and EMR are both the same thing,
Yes, they are all photons of different frequencies or wavelengths, but the question is: how does the silicon crystal act as a receiver or antenna? That is where it gets interesting.
Interesting explanation. Could you do similar comparison one explaining how Canon’s Dual Pixel AF system works? Does DPAF somehow prevent Canon from doing a BSI at the same time?
All very interesting. Is the sensor a Bayer or the same technology as the XT2 sensor? The XT2 sensor apparently had advantages for Astro and would record deep space colours including the reds of nebulae. Have we lost that with this new sensor?
thats just the CFA
From what I've seen so far in direct comparison images between the T2 and T3, there's naff all difference.
thank you Ken
Good stuff! Thanks Ken.
4:51 Vignetting on vintage glass explained!😂
How about the Foveon type sensor that Sigma is pushing and i've heard Canon has bought into?
Is (effective) ISO affected by photosite size/area?
Is that the reason an ISO 100 photo from a 24MP Sony a7III looks better and less noisy than one from a 24MP Sony a6x00?
And/or are there other factors involved?
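One factor that is definitely involved is photon shot noise: a larger photosite collects more photons, and SNR grows as the square root of the photon count. A quick sketch with hypothetical photon counts:

```python
import numpy as np

# A photosite that collects N photons has shot noise sqrt(N), so its
# SNR is N / sqrt(N) = sqrt(N). Doubling the photosite AREA doubles the
# collected photons and buys a factor sqrt(2) in SNR (about half a stop).
def shot_noise_snr(photons):
    return photons / np.sqrt(photons)

small_site = shot_noise_snr(10_000)   # hypothetical photon count
large_site = shot_noise_snr(20_000)   # twice the collecting area
```

This is only one factor; read noise, sensor generation, and processing differ between bodies too, so it does not fully settle the a7III vs a6x00 question.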
Another informative video. Thanks mate 👍
I think the Yagi and wave length example is good way to compare the two sensors SNR.
Very interesting, how does old CCD compare? In S/N
Really good explanation! Thanks. Still... it makes me wonder why the "conventional" sensor was designed that way? There must be a reason, since it was designed (unlike the mammalian vs. squid retina).
Hey man, many great videos. I have yet to learn enough to hold a conversation, but I would like for you to put me on the right path to understanding. Keep at it, and please say that you are able to sustain an income from all your knowledge.
Wait, so why would engineers place the wiring harness between the lens array and the photodiodes in the first place? The BSI arrangement seems so much more logical anyway. Why block the light?
cause that was the level of technology at the time,...... designs change to improve as technology improves
@@KenTheoriaApophasis ahh gotcha.. Thank you! That makes sense now.
The X-T3 sensor is from Sony
When you getting your XT3????
tomorrow
Waiting for ur views…
Great vid thanks
What is the purpose of the Copper in the BSI sensor if the photo diodes are at the top?
both are for signal transmission of the image
Is the X-T3 sensor an ISO invariant sensor like the sensor of the X-T2?
it is , yes
Great video. Wow. Your mind is a treasure trove.
great video, and comments by Several of your Subs
Hey Ken can you please do a video on removing the CFA. Thanks!
Why didn’t they just make BSI sensors in the beginning? What technology recently allowed this to happen? Great video
innovation , also too BSI required new tech for mfg. it..... i heard initially it was really tough designing the BSI for mfg.
Why does it feel like Fuji is just releasing a refreshed 4-year-old sensor? I hope once raw processors come out and better tests are done we will see a noted improvement over the X-T2 photos.
Thank you!
Excellent video !
Great video thanks Ken.
More exciting to me are the coming organic sensors (more dynamic range than the human eye, finally!) and global-shutter sensors.
The BSI should be called FSI and vice Versa.
ok, so why doesn't a CMOS sensor at least use copper for some marginal improvement? Is there a reason that they're stuck with aluminum? Also, how does sensor heat affect SNR (and noise) in images? Thanks so much, this is a very, very useful video with all the new cameras and sensors being released these days!
copper dissipates heat better... aluminum is a design/manufacturing choice, apparently easier to build, I hear
As far as I'm aware it's purely a cost thing. If they're making BSI sensors then they're aware of the kinds of clientele that are interested and will care about materials. For the non-BSI sensors, I guess thats less the case.... or maybe Ken is specifically talking about Fuji's BSI sensor....
Good explanation. But I thought “Deeper Pixels” were better. :)
Thank you
11:58 maybe a "write-o" 😜
Nice! But are they really using X-Trans or Bayer? I think BSI CMOS only comes with a Bayer array.
thats just CFA, has nothing to do with the sensor
Why the 2nd ray of light doesn't refract into PD and instead "hits" Al? After all there is that micro lens there. What's the % of loss there? It can't be too high.
loss is about 1 1/3 stops or more, depending
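Converting a stop figure like that into a linear fraction (taking the 1 1/3 stop number above purely as an example):

```python
# A loss of 1 1/3 stops means the photodiode sees 2**(4/3) times less light.
loss_factor = 2 ** (4 / 3)            # ~2.52x less light reaches the diode
fraction_lost = 1 - 1 / loss_factor   # ~0.60, i.e. roughly 60% of the light
```

So a 1 1/3 stop penalty is not a small tweak: well over half the light never reaches the photodiode.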
Ah copper field 😎
Nikon has said they used the BSI sensor for better speed and not better image quality. CMOS is a lot better than the CCD sensors they started with. Aren't the new Nikon Mirrorless BSI?
BSi reads faster but also has better native SNR :)
The cmos says, no problem, I’m getting hotter, but I’m logarithmic in nature, keep going, hit me with those rays, fry my ass, let those stray photons do their magic and warm the dark guys, I can take it, we ain’t melting tungsten yet!
Some gals look better with LESS front side illumination. The less the better.
Ken, you mocked Canon for using an old sensor in the EOS R, but at the same time you praise the 4+ year old X-T3 sensor. I know it's BSI, but still... it's old tech.
From the few comparisons I've seen around, the X-T3 with its BSI sensor doesn't do much, if any, better than the X-T2 in terms of noise (and neither does it in detail retention). I'm waiting for a proper raw-file comparison, but if there's a difference it must be minimal if not negligible. Coming from a 5DIII and thinking about switching to the X-T3, I just hope the actual exposure of the image will be boosted a bit; in dpreview tests the 5DIII is about 2/3 stops brighter than the X-T2 given the same aperture/speed/ISO, and on top of that, most of the time I have to overcompensate the 5DIII exposure for my shots.
that's cause the pixel pitch is smaller
Which comparisons? I can't find any reliable data which supports any good XT3/XT2 comparisons....? None which go into any real detail anyway. Like you said, we'll know a bit more when we have some more raw files!
That said, all of this doesn't necessarily mean they did it for better IQ. It could be to do with heat management and speed of processing the image at a similar quality whilst dealing with significantly faster output from their new processor.
For example, it could be that they found that if they used something like an XT2 sensor with their new processor, the additional power meant heat which meant more noise. As such, they may not have done this to 'improve' image quality, but to speed everything up whilst not taking any hits in quality. Let's be honest - not many people were complaining about quality of image from the XT2 :)
This: bjornmoerman.blogspot.com/2018/09/first-look-review-fujifilm-x-t3-when_9.html
And this: ruclips.net/video/Z4AxkbOmfnY/видео.html
Both done with JPEG. If anyone in this world had Affinity Photo (which can surprisingly open X-T3 raw files) and both cameras, that'd also be useful, though I won't count on it.
Why isn't the ISO still using those steps from the film days?
digital ISO is applied gain, its not part of exposure, film ISO is silver crystal grain size
Theoria Apophasis sorry, I meant why are we still using that system? You should be able to apply gain at different increments, like ISO 64, 65; if it's just gain, smaller increments should be easily achieved, right?
Go one step further... instead of a gain of 1 you have a gain of 100 in the brightest parts of the image. But the blacks are still producing 0.001 and 0.1 respectively. So if you shorten your exposure from 1" to 1/100, then 99% of the time there is nothing recorded in the blacks... Watch a star twinkle!
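A sketch of that effect with Poisson photon arrivals (the shadow photon rate is hypothetical; the point is how often the blacks record exactly zero once the exposure is cut):

```python
import numpy as np

rng = np.random.default_rng(1)
shadow_rate = 0.5   # photons per second reaching a deep-shadow photosite (hypothetical)

# 1 s exposure vs 1/100 s exposure with 100x gain: the gain rescales whatever
# was counted, but it cannot create photons that never arrived.
long_exp = rng.poisson(shadow_rate * 1.0, size=10000)
short_exp = rng.poisson(shadow_rate / 100, size=10000)

frac_zero_long = np.mean(long_exp == 0)    # roughly exp(-0.5), ~61% of frames empty
frac_zero_short = np.mean(short_exp == 0)  # roughly exp(-0.005), ~99.5% empty
```

With the short exposure, almost every frame records zero in the shadows no matter how much gain is applied afterward, which is the twinkle.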
Love it!
I just checked, unfortunately, you do not have any other video about the physics of photography, and that is sad. best.
whats sad is that you have nothing to contribute
@@KenTheoriaApophasis ?
Do you have your HAM radio license?
ohh yes
Nice, me too. Maybe I will catch you on some HF there in Florida. 73.
But what's the purpose of the wired array on the BSI sensor?
The Ggzzbb gathering all those photons lol. 🤣🤣😂😂😂😆😆☝️
Transmits the signal.
The wired array is connected to the ADC for the image processor to readout the sub-pixel brightness readings.
signal transmission
Theoria Apophasis So I guess on the CMOS there is another wired array after the PD then?
What if you could turn the CFA clear with the flick of a switch! True monochrome. They already have switchable smart tint for windows and shower doors, in colors. Haven't seen red yet. Whoops, maybe I should patent this.
yeah, that'd be nice
I'm alive
Exposure..
Great explanation. 73 ;-)
Interesting. Based on your explanation, backside illuminated sensor is officially a dumbass name 😂 But I can see how this design allows for better SNR. Would be nice to get an upgrade to the D500 with a BSI sensor. 🙏🏽
Scott Miller it is actually an accurate name if you understand what is illuminated.
eagleeye photo I just read up on it and now understand, but I was referring to Ken’s interpretation of it that made me laugh 😂
@@millertime6 Yep, he has some funny explanations.😁
Dude, who the hell are you! Thanks for another incredible video! INSANELY INCREDIBLE!
However, based on your explanation @ 7:26 (ruclips.net/video/J2Ar--bLe6E/видео.html), it seems like you meant to draw the line from Volume (output gain) to ISO, since I understand from your explanation that ISO = applied gain, or output gain. Still an excellent video. Thanks a million.
CQ CQ Great video.