I had a cursory look at the papers and I'm going to share some uninformed doubt. The promise of the perovskite crystal method is to mimic what the human eye does in the retina. That's the first red flag 🚩 (haha).. but I can't find an actual image of the sensor layer. The 3D renderings I do find, though, clearly show gaps between the areas of the crystal structure that are sensitive to the various wavelengths. So not *all* red light will get through the crystal.. only the fraction that hits the red-sensitive molecules in the layer makes its way to the sensor. What you end up with is gaps around each pixel (which reduce QE) AND gaps within the crystal layer for each colour as well. Gaps in gaps (a toy version of this compounding is sketched below). So my intuition says there is no way this will be better than a mono camera with separate R, G and B filters. Remember, the human eye has cones.. so its QE is still dictated by the gaps between those colour-sensitive cones. The crystal appears to just make little colour-sensitive areas (like small cones) over each single pixel. That's not optimal QE.
I also believe Foveon sensors currently do this, but I don't know if they have superior low-light performance.
CMOS sensors also have gaps; hence the micro lenses over them. How wide these gaps are remains to be seen.
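A toy version of the compounding argument above, in Python (every number here is invented purely for illustration, not a measured property of any real sensor):

```python
# Effective QE as a product of every geometric and material loss in the
# stack. All values below are made up to illustrate the argument.

pixel_fill_factor = 0.90    # fraction of pixel area that is photosensitive
color_area_fraction = 0.60  # hypothetical fraction of the crystal layer
                            # sensitive to one colour (the "gaps" above)
material_qe = 0.85          # hypothetical QE of the perovskite layer itself

effective_qe = pixel_fill_factor * color_area_fraction * material_qe
print(f"stacked, one colour: {effective_qe:.2f}")   # ~0.46

# versus a mono sensor behind a good interference filter (also illustrative):
mono_qe = 0.90 * 0.95  # sensor QE x filter transmission
print(f"mono + filter:       {mono_qe:.2f}")        # ~0.86
```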
Foveon did this decades ago using the fact that the depth of light absorption in silicon varies as a function of wavelength. They used a surface blue pixel sensor and buried green and red pixel sensors. Dick Merrill was the primary inventor for this technology. I wrote most of the patents for the X3 technology. See for example US Patent No. 6,727,521. Sigma used Foveon X3 sensors in digital cameras quite a while back.
It's not the same tech. Like making a cake with sugar vs salt. Foveon was a dead end 10 years ago.
Thank you for pointing out this technology for camera sensors. That's indeed very challenging. In photovoltaics, this effect of stacking layers with different spectral sensitivities is used in multi-junction (tandem) solar cells.
Five years away and already my bank manager is in panic mode! Thanks, found this hugely interesting.
@@peterlaubscher3989 Nothing wrong with holding up a bank to get funds for new astro gear.
Interesting news! Perovskite refers to a certain crystal structure. There are many kinds of different crystals with this structure, they are quite popular for optical applications. There is a lot of research about solar cells based on perovskites. I have worked with Lithium Niobate for optical data storage. It is also used for frequency switches in optical fiber communication.
I wouldn't be too optimistic about having a camera of that type any time soon. First of all, I would like to see their numbers, whether they really get a good quantum efficiency from the single layers. The theoretical advantage you showed of course depends on the single layers having a quantum efficiency close to that of a silicon sensor. I think that is where the Foveon sensor didn't quite deliver.
The other big catch for photography is the exact spectral sensitivity of each layer. What is so great about the Bayer sensor is that you can craft the filter on each pixel to a precise spectral absorption curve. A "green" filter, for example, isn't just green but has a particular spectral curve, so its transmission at, say, 532 nm versus 550 nm plays a huge role in color perception. That is probably why Kodak and Fuji sensors have their special character. This might be difficult to achieve with layered sensors. It would be less of a concern for astro cameras, where one would use multi-line filters together with such a sensor. On the other hand, the current monochrome cameras already offer that.
The other big caveat: even if all else works out, how are we going to manufacture such sensors with megapixels at scale? All the fabs are designed for silicon; a whole new manufacturing technology might be necessary.
So yes, very intriguing, but certainly not around the corner.
One more point: some years ago Panasonic presented sensor tech which uses several organic layers for a similar effect as this new sensor. I think they even sell a very expensive video camera with it, but it hasn't shown up in the mass market yet.
The organic sensor thing was such a flop. Even the name sucks lol.
Lithium niobate is the most promising active material in silicon photonics for high-data-rate transceivers…
As far as I know, QE is defined as how many photons are converted into electrons at each wavelength. So the filter does not change the pixel's QE at its highest-transmittance wavelength. We are throwing away the red light at the blue and green filters, but that has nothing to do with QE: if no red light hits the silicon of the green pixel, it cannot be converted, per the definition of QE. You are talking about the spectral sensitivity of the system, not QE.
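To make the distinction concrete, a tiny sketch (illustrative numbers only): QE belongs to the photodiode itself, and the colour filter only scales the overall system response at each wavelength.

```python
# QE is a property of the photodiode; the filter scales the system
# response. All numbers are illustrative.

def system_response(filter_transmission: float, qe: float) -> float:
    """Response at one wavelength: filter transmission x sensor QE."""
    return filter_transmission * qe

qe_650nm = 0.80            # hypothetical silicon QE at 650 nm (red)
green_filter_650nm = 0.05  # a green filter passes almost no red light

# Red photons blocked by the green filter never reach the silicon, so the
# pixel's QE at 650 nm is unchanged; only the system response drops.
print(system_response(green_filter_650nm, qe_650nm))  # 0.04
```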
Around 20 years ago Sigma was making 3-layer sensors that were around 8 MP and calling them 24 MP sensors. The images were stunning because no interpolation was needed, but in the end they weren't better than everyone else in the race. Too bad it never really took off.
Totally different tech.
Very exciting. I was out last night observing. I'm excited to get a camera and start imaging.
It looks like the old Foveon system
Very different.
Interesting tech, thx as always for breaking it down 👍. Question: if the RGB layers are stacked, does that mean debayering is not required? Would you effectively get a color image SOOC?
That is correct, I believe.
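For anyone curious, a minimal sketch of the difference (toy code assuming an RGGB mosaic and crude neighbour averaging; real demosaic algorithms are far more sophisticated). A stacked sensor skips the interpolation step entirely:

```python
import numpy as np

def demosaic_green(raw: np.ndarray) -> np.ndarray:
    """Estimate green at every site of an RGGB mosaic (toy version)."""
    g = np.zeros_like(raw, dtype=float)
    green_mask = np.zeros_like(raw, dtype=bool)
    green_mask[0::2, 1::2] = True  # green sites on even rows
    green_mask[1::2, 0::2] = True  # green sites on odd rows
    g[green_mask] = raw[green_mask]
    # fill red/blue sites by averaging their four green neighbours
    for y in range(1, raw.shape[0] - 1):
        for x in range(1, raw.shape[1] - 1):
            if not green_mask[y, x]:
                g[y, x] = (g[y-1, x] + g[y+1, x] + g[y, x-1] + g[y, x+1]) / 4
    return g

raw = np.random.rand(6, 6)        # pretend mosaic data
full_green = demosaic_green(raw)  # two thirds of these values are guesses

# A stacked sensor reads all three colours at every site directly:
# r, g, b = stacked_sensor.read()  # hypothetical API, no interpolation
```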
Exciting video, thx for covering this. Hope it comes to ZWO ASI cameras soon.
I'm pretty sure Sony uses 4 sub-pixels per pixel, and when you Clear Image Zoom all the way in you are seeing the Bayer pattern.
Foveon did this decades ago by stacking blue, green and red sensors, using the fact that the absorption of light in silicon varies as a function of wavelength: the blue sensor sits on the surface and the red and green sensors are buried below the silicon surface. The principal inventor of this technology was Dick Merrill. I wrote most of the patents for this technology; see for example US Patent No. 6,727,521. Sigma incorporated the Foveon X3 sensors in its digital cameras and later acquired Foveon.
It's actually very different in how it works. Like making a cake with sugar instead of salt.
@@TheNarrowbandChannel The only thing that seems to be different is the perovskite material and its photon-capture physics. My comment, like previous comments by others, was to the effect that the video seemed to imply the concept of vertical color photon capture was something new, which it is not. There is not enough information yet to determine whether properties such as photon-capture efficiency, signal-to-noise ratio, and manufacturability will make this a better technology than the Foveon X3 vertical imager.
@@KennethDAlessandro I did say in the video that there have been other (many in fact) attempts at this.
What is better about this is that light is not absorbed across the entire spectrum as it is in Foveon tech, which is a dead end and has been for some time.
A 7-layer (or more) sensor for narrowband, wow! Based on how you describe this, it seems you would be capturing all those wavelengths at the same time! WOW! One hour yields all your data channels at once! WOW! This could be a game changer in so many ways, scientifically and for us hobbyists.
A 7-layer sensor would be cool, but it's not reality yet, even in the lab. This tech makes it possible in the future, though.
Sweet! This reminds me of the solar modules used on the Spirit and Opportunity rovers, known as multi-junction cells. From what I remember reading, they cost over $1 million per square centimeter to make. Of course, Opportunity lasted almost a decade when it was only supposed to live 90 days.
Great video as usual, very much enjoyed!
Interesting!
Excellent video! I hope you are well. Regarding the low number of research papers being published in journals: yes, COVID-19 has impacted research & development, but I wonder if the incentive to fund R&D within the manufacturing sector has dropped for the following additional reasons:
(a) manufacturers have made a massive investment in the equipment & training for the efficient & profitable mass production of Bayer-array CMOS sensors... which means they need to pay off the present technology before considering adopting significantly different sensors, and
(b) the Bayer-array sensors are quite profitable, and what research is being done is targeted toward the existing technology (this happened with CCD sensor technology, until CMOS sensors became viable & profitable).
I think a lot of research went into TDI cameras and sensors for automated vision systems. For consumer applications, the BSI sensors were stacked with memory and amplifier chips, so research went into 3D packaging for improvements at the system level. There was some research, however, into curved chips by Sony (I believe) to compensate for the field curvature of optics. In the end, optics got so much better with the “CODE V optimize button” and new low-dispersion materials that curved chips were no longer required.
I'm not sure about the increase in light due to removing the Bayer filter. I guess the questions would be the percentage of available light absorbed in total by the sensor, the loss through the stacking process, and the overall efficiency. However, if 90% efficiency is achieved, and the sensors can be stacked and their *size* increased, then much better fidelity could be achieved with a reduction in overall noise. That would be tremendous.
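A quick shot-noise sanity check of this point (illustrative numbers only): SNR grows with the square root of the detected photons, so capturing roughly three times the light buys about a 1.7x SNR improvement.

```python
import math

photons_on_pixel = 10_000  # invented photon count

def snr(photons: float, capture_efficiency: float) -> float:
    """Signal over shot noise = sqrt(detected photons)."""
    detected = photons * capture_efficiency
    return detected / math.sqrt(detected)

print(snr(photons_on_pixel, 0.30))  # Bayer-like effective capture: ~54.8
print(snr(photons_on_pixel, 0.90))  # hypothetical stacked capture: ~94.9
```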
4:08 I think a form of stacked sensor was already made around 20 years ago in consumer stuff, and the idea goes back even to the 70s. I think it was called the X3.
Sigma has a full-frame sensor they want to bring out with the tech, and I think Fuji already made a stacked-sensor camera a few years ago.
Totally different tech. In hindsight I should have said that in the video.
@@TheNarrowbandChannel Definitely. The X3 wasn't really CMOS; there was something different about it, I think in the way color was read.
@@deltacx1059 Some other commenters here explain the difference. They are like salt and sugar.
Love these longer videos man ❤️
Glad you like them!
Holding on to my MFT gear! It seems like these types of improvements would make a Micro Four Thirds sensor closer to a full-frame sensor of today. (Or am I misinterpreting this?)
Well, every format will eventually be able to see this improvement.
Hi Ben,
Maybe it is my lack of sensor knowledge, but I wonder what makes this sensor technology different from Foveon, as it also has 3 layers.
Foveon used several layers of silicon. Basically, they measured how deep a photon penetrates the silicon; this depends on the wavelength and hence on the color (sketched below). But they were not very good at distinguishing colors and produced a lot of noise. So they were very nice in the resolution department but didn't give better ISO.
Yes quite different and Foveon was a bit of a dead end.
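A small sketch of the absorption-depth idea described above, using rough order-of-magnitude absorption coefficients for silicon (Beer-Lambert law; these are not datasheet values):

```python
import math

# approximate absorption coefficients of silicon, per micron
alpha_per_um = {"blue 450 nm": 2.4, "green 550 nm": 0.7, "red 650 nm": 0.25}

def absorbed_fraction(alpha: float, depth_um: float) -> float:
    """Fraction of light absorbed within the first depth_um microns."""
    return 1 - math.exp(-alpha * depth_um)

for name, a in alpha_per_um.items():
    print(f"{name}: {absorbed_fraction(a, 0.5):.0%} absorbed in top 0.5 um")
# blue ~70%, green ~30%, red ~12%: shallow layers see mostly blue,
# which is how Foveon separated colours by depth.
```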
You look great mate! Thanks for the informative and entertaining video!
I enjoyed your video and thorough research. Thank you! I have a nice 30”x40” print hanging on my wall taken & printed from Fuji Velvia 6x7cm format film. Loved the Velvia colors and saturation!
Wonderful!
How does this differ from Foveon sensors?
See other comments; it's very different.
Wasn't that basically the Foveon sensor?
Nope, see other comments.
Perovskite is being developed mainly by China at the moment for the next generation of solar panels. Even though the efficiency is greatly improved, its main downside is stability: currently it wouldn't last as long as existing solar panels. I'm wondering if these sensors would have a similar issue.
Given that solar panels are bombarded by the sun's harmful rays constantly, I can see how they would deteriorate. A camera sensor, though, is only exposed for very short periods, so this shorter life may not factor in at all. But interesting.
@@TheNarrowbandChannel yeah that makes sense. Exciting if it can be developed.
AND what if the starlight actually falls only on the blue pixel but not the green or red, since these are physically adjacent? You might fail to capture certain detail on the color camera, because only light of that color can be detected by the pixel at that location. Very interesting stuff, thx.
Most optics are not sharp enough to focus a star onto a single pixel.
Interesting stuff! Looking good sir!
I don't see why, instead of an RGB Bayer matrix, they couldn't do an Ha/OIII/SII/Hb matrix. I understand that would be incredibly niche, but the astro market would eat it up. Also, perhaps a Bayer array without the 2 greens would be nice.
Great video, well researched. Like all things transitioning from lab to production there is lots to tweak, but without basic research no progress is made.
Thank you for the vid. God bless you and your family. Regards, Athol
Glad you enjoyed it!
Strangely enough, the Pentax K-3 Mark III has better ISO performance than its monochrome counterpart, the Pentax K-3 Mark III Monochrome. So in this particular case, purely from the perspective of noise at the various ISO settings, the colour sensor performs better.
Pentax must have done something dramatically wrong with that one's firmware, or it's not the same sensor. The color version uses the IMX571 sensor, and the mono version of it has vastly superior performance, as many astronomers have confirmed. It's a very popular sensor in its mono version in astrophotography; a ton of data is out there about it.
Yes, I honestly don't know why this is the case; as you said, it could be a different sensor. And I understand what you mentioned about the monochrome sensor and astrophotographers' reviews of it. The whole thing surprises me too.
Good to see you! I was thinking about you and your channel a few days ago, and there it is... a video popping up on YT 😂 Greetings and blessings from the UK 🎉
Welcome back!
Excellent explanation... I like the new tech, but it's still too far away. I want round sensors now.
This sort of thing has been done before with the Foveon sensor that Sigma used back in the early 2000s. It used a stacked RGB design. en.wikipedia.org/wiki/Foveon_X3_sensor
It is a bit different, as it uses the depth of photon absorption in silicon, not separate color-absorbing layers.
Foveon is very different and was a big dead end in tech.
Cool tech! Hopefully it becomes a real product soon.
In theory it should also fix (or strongly reduce) the moiré problem that Bayer sensors have. You can work around it with a very high-resolution sensor or a softening filter, but that's not ideal.
Since both web-compressed videos and stills are often compressed to 4:2:0 chroma subsampling, the actual resolution you are watching is probably much lower than the stated sensor resolution (illustrated in the sketch after this thread). At least a better sensor would eliminate one of the color-resolution-reducing steps in a normal publishing workflow. (You can compensate for this by oversampling.)
Since the computer also doesn’t need to invent most of the pixel data based on the surrounding pixels, decoding the file should also be less compute intensive.
So: higher-quality image, better low-light sensitivity, no moiré, faster decode. I wonder if this tech also has a downside. Perhaps the actual color reproduction/separation is worse somehow? Lower dynamic range? Slower readout speed? Much higher production cost?
It looks like they claim lower production costs.
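For anyone curious, a toy illustration of the 4:2:0 point above (made-up arrays; real encoders filter before subsampling):

```python
import numpy as np

luma = np.random.rand(8, 8)  # Y channel: kept at full resolution
cb = np.random.rand(8, 8)    # one chroma channel before subsampling

# 4:2:0 keeps one chroma sample per 2x2 block (here by simple averaging),
# so colour detail is quartered before the viewer ever sees it.
cb_420 = cb.reshape(4, 2, 4, 2).mean(axis=(1, 3))
print(luma.shape, cb_420.shape)  # (8, 8) (4, 4)
```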
Nice to see you back 😊 A little question: are you sure one-shot cameras' filters absorb certain light frequencies rather than reflecting them back? Also, how about a WRGB pattern instead of RGGB? How about an HaO3O3Ha pattern? 😊
Whatever happened to Gigajot? Seemed so promising just a few years ago, but the company appears to have been disbanded.
I think they were bought up.
What about a new technology where a single pixel in a single layer can detect all colors? That's what we need.
These crystals have been tried for improving solar cell efficiency over the last several years. I'm not sure whether the stability and ruggedness of these crystals has been addressed, and no affordable solar cell based on them is available as yet.
It is cheaper than silicon and requires less power. Also, since the sensor of a camera is not out in the blazing sun all day, it will last much longer.
Panasonic had video cameras that used a prism to split the light into R, G and B onto 3 different sensors; they weren't really any better than competing video cameras.
In the 90s, 3-chip video cameras were king of the hill. They had dramatically better resolution and low-light performance compared to single-chip cameras of the same era. You should look up 3-chip cameras; they were pretty neat.
@@TheNarrowbandChannel I own one, plus a similar-vintage Sony and Canon; I compared them side by side and saw little improvement.
Isn't that like the Sigma Foveon technology?
Sorry, I see other people made the same comment...
No, very different.
Imagine the OM-1 Mk VII: 20 MP, performs like 30 MP, and can shoot noise-free at ISO 3200. And the internet would still complain it's not 40 MP at 6400 😂😂😂😂 Great to see sensor tech improvements.
Honestly, we're so spoilt as photographers and videographers these days.
I agree 100%
Fascinating, thank you 😊
Hi mate. I got into NB after following your vids. I've just done a vid on my H-alpha project- mapping the southern skies in high res. Please have a look and see what you think. Thanks 👍
I will check it out. Where is it online?
I don't think it was all COVID that hurt research on sensors. The camera market has crashed over the past few years with the rise of decent cameras in smartphones. I'm thinking there's less money to be made, hence the drop in research. I've been hearing that the market is starting to recover a little, so hopefully the pace of innovation picks up.
Another cool technology that I came across is color routers... very cool stuff.
Thanks for the video :)
Camera companies crashed 10 years ago. There are actually more cameras being made today than ever; most modern cars have a dozen cameras in them. So demand for imaging sensors is on the rise now, not in decline.
I think 2 years ago I did a video that mentioned color-router sensors.
:( I just bought a 533 because of your channel :)
I am about to do just the same :) I don't think these sensors will be available any time soon. I would be surprised if they were available even by the time you look into replacing your new camera.
@@peterherth7379 Thankfully I'm very happy with it so far. Very good recommendation, ty again :)
Yes, if you just bought a new camera, do not sweat it. It's probably going to be a while; these sensors will most likely show up in cell phones 2 years before cameras.
Perovskite materials have been studied intensively for solar cells (photovoltaic) for decades. It is not new in the sense of converting light into electric potential.
Sound is similar: today's digital music is actually only 4 channels, while records produced 7 channels, 100% better quality.
Curious cool…
I thought that sensor was already here… in DSLRs, very limited, or in those fancy $50k ones.
No, they have a way to go. There are other ways of doing this, though.
@@TheNarrowbandChannel Have you checked out pixel shifting? It can increase pixel count, reduce noise and increase color depth (Nikon Zf; I have not played with it yet… have a driver issue on the PC lol). There's a sketch of the idea below.
@@ricardoabh3242 My cameras do it. Have for about 6-7 years now.
@@ricardoabh3242 You can also drizzle a stack.
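A rough sketch of the 4-shot pixel-shift idea from this thread (simulated frames and shifts; real vendors implement the details differently):

```python
import numpy as np

h, w = 4, 4
scene = np.random.rand(h, w, 3)  # "true" RGB scene, invented data

def bayer_sample(img: np.ndarray, dy: int, dx: int) -> np.ndarray:
    """Sample an RGGB mosaic from img with the CFA shifted by (dy, dx)."""
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            cfa = [[0, 1], [1, 2]][(y + dy) % 2][(x + dx) % 2]  # 0=R 1=G 2=B
            out[y, x] = img[y, x, cfa]
    return out

# Four exposures, shifting the sensor one pixel between shots: every site
# is eventually sampled through R, G, G and B, so full colour per pixel
# can be assembled without demosaicing.
shots = [bayer_sample(scene, dy, dx)
         for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```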
Sigma has used this sensor for a long time...Foveon.
It's totally different. Like salt compared to sugar.
@@TheNarrowbandChannel I mean, it's the same basic idea, so I wouldn't say it's "totally different". The underlying material and tech are different.
@@robj144 Foveon absorbs light across the entire spectrum at each layer, so it is almost impossible to get its QE above 50%.
Sounds like math!
You lost me at “all 3 wavelengths of light”.
Technically you can divide light into an infinite number of parts across its spectrum, but that goes beyond the time limit of this video. Or are you saying the video is not basic enough for you?
This makes me want to toss my MC camera and go to mono. ;)
mono is awesome.
First, learn a bit about the luminosity function; it will tell you that your '33%' is nonsense when it comes to normal photography cameras (which are a different use case from astro cameras). The Bayer sensor is very clever, even if it looks simple. Calling it 'red, green and blue' misunderstands how it works: what it actually provides is a luminance channel and two chroma channels. The luminance channel gets half the pixels and covers most of the visible spectrum.

Stacked pixel sensors are not new; they've been around a while, called Foveon. The problem with these has been extracting the charge from the three layers, and also implementing correlated double sampling, which is essential to eliminate pattern noise. The result is a lower QE than traditional sensors and therefore more noise. As you note, the direct QE of silicon is in the 90s, so even if perovskites can be better, it's not by much. I would bet that extracting the charge would be even more difficult. There is no perovskite chip tech, so there would have to be an interface (actually three) in each pixel to the readout circuitry, and that sounds hard. Then you couldn't manufacture it on conventional fab lines. They say the best they can do is a 100k-pixel sensor, so there's a load of development and production engineering to do, and who's going to pay for that?

If it ever does come to something, will it outperform silicon by enough to be competitive? I doubt it. It's a non-starter.
Even by your logic, luminance is half. I can tell you that most of this is just forum nonsense, and you won't find any scientific peer-reviewed papers that support it. Look up the throughput of sensors (a toy throughput calculation follows this thread).
Lastly, Foveon is very different, as has been explained by others in the comments here.
@@TheNarrowbandChannel Sure it's half (actually a bit more when you work out the transmission curves that real CFAs use, the designers aren't as dumb as you think). As for forum nonsense and peer reviewed papers, that's just a deflection from your inability to actually answer the points that I made. If peer reviewed authority is important to you, I should let you know I'm a retired university professor with a research specialism in sensing systems and over 150 peer reviewed papers to my name. You won't find any peer reviewed papers about productionization of these (and other radical sensor ideas) because it hasn't happened yet, if it ever does happen. It will occur in the commercial domain, and they won't publish (if they ever do) until they have the details worked out and patent protection in place. I was simply letting you know what are the likely pitfalls before this makes it to a commercial product, if it ever does. There are loads of peer reviewed papers for new concepts which were totally impracticable and never made it to production, let alone succeeded in the market. How's the Panasonic organic sensor going? If you want a bit more detail on the reasoning behind everything I said, I'm happy to oblige, if you're willing to learn rather than just dismissing things that you clearly don't fully understand.
You did not provide any peer-reviewed evidence. (Still waiting.)
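For what it's worth, the throughput disagreement in this thread is partly about assumed filter curves. A toy calculation for an RGGB array under a flat spectrum (idealized block filters; real CFA curves overlap and pass more light):

```python
# Average transmission of an RGGB array = sum over pixel types of
# (fraction of pixels) x (fraction of spectrum each filter passes).

# Idealized narrow filters, each passing only its own third of the spectrum:
narrow = (1/4) * (1/3) + (2/4) * (1/3) + (1/4) * (1/3)
print(f"narrow block filters: {narrow:.0%}")  # 33%

# Broader overlapping filters, e.g. each passing half the spectrum:
broad = (1/4) * (1/2) + (2/4) * (1/2) + (1/4) * (1/2)
print(f"broad block filters:  {broad:.0%}")   # 50%
```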
That's like Foveon; they got the patent and were bought out by Sigma years ago. There have been a number of Sigma cameras with it.
Perovskites aren't that efficient.
Foveon uses silicon; it's fundamentally very different.
Let me know when you do a review of this tech. Until then this is just a "look at me talk about maybe" video. The internet is full of them now. None of you can wait for anything. Need to know now even if you have to make up shit.
My channel covers a lot of tech.
@WiFuzzy you sound like an entitled jerk. The universe doesn't revolve around you.
Just an FYI on your video: this technology is not new at all. It was developed and implemented in the Sigma SD9 DSLR under the Foveon name; you can look up its history on Wikipedia. Now, it may be a new thing coming to astrophotography, but the tech has been around for over 20 years. Just so you know.
en.wikipedia.org/wiki/Foveon_X3_sensor
Sigma's tech is not the same. It's as different as salt and sugar.