How cameras see color: RGB vs RGBW, Bayer vs Fuji X-Trans, Purple isn't real?!

  • Published: 21 Nov 2024

Comments • 546

  • @arijitghosh6378
    @arijitghosh6378 5 years ago +35

    This is the reason why I love this channel. No other YouTube channel really digs deep into the science of photography. Great stuff! Keep 'em coming.

  • @dmphotography.prints
    @dmphotography.prints 5 years ago +52

    Been on this planet 47 years, countless hours in science classes... and at 21:04 I finally "get light" and how it relates to photography (which is a bonus)!!! I'm voting Tony Northrup in 2020

  • @Bloggerky
    @Bloggerky 5 years ago +87

    Keep bringing the science. We nerds salute you!

  • @jackharwick2080
    @jackharwick2080 5 years ago +48

    Best presentation on this subject EVER. And I have been a photographer for seventy years.

  • @MusicOfDreamweaver
    @MusicOfDreamweaver 5 years ago +59

    The nerd videos are the best. Also, I now understand moiré.

  • @Krekkertje
    @Krekkertje 5 years ago +1

    What impressed me the most about this video was the fact that you were able to draw an X-Trans sensor array by heart. I've been a fuji geek for about three years now and I can't do that.

  • @barashkaz
    @barashkaz 5 years ago +68

    Yeh, it gets real messy / interesting once you start digging deep ... the deeper you go, the more freaky things get. Thank you for the great video, keep doing nerdy stuff.

  • @Camrographer
    @Camrographer 5 years ago

    I know this information is accessible in various places, but having you aggregate and present it in an easy-to-comprehend manner does the community a great service. An educated community elevates the field as a whole.

  • @jonrolfson1686
    @jonrolfson1686 5 years ago +2

    Nicely done. I had seen representations of the Bayer charts before, but never understood why half of the photo-sites were green. Having provided this clear explanation of light, how we perceive it, and of the state of technology for the handling of light, you deserve a relaxing sit-down with something purple, cool and bubbly.

  • @lasignorapianissima
    @lasignorapianissima 5 years ago +15

    Pluto isn't a planet, purple doesn't exist... Life is ruthless! 😭

  • @Edgarbopp
    @Edgarbopp 5 years ago +1

    Fantastic Tony

  • @lylestavast7652
    @lylestavast7652 5 years ago

    That's a great job of explaining it all. Can't tell you how many people think each pixel is a single photosite, and your coverage of that point is superb...

  • @tubemapper
    @tubemapper 5 years ago +30

    Haha love the big NERDY WARNING!! I enjoy these videos, looking deeper into the workings of our cameras. Thanks for all the effort and research put into these, it's appreciated ☺

  • @PaulStephenson003
    @PaulStephenson003 5 years ago

    Some folks will be wowed by the end result, the what (the photo/video); some folks will be wowed by the how (the tech); and lastly, some folks will be wowed by the why (the science/explanation behind the tech). It's great you create content that appeals to all three. Well done.

  • @angelmathew7275
    @angelmathew7275 4 years ago

    Loved the way you linked the concept of cones and rods in eyes processing colors to the way sensors capture colors.

  • @lazyastronomy3348
    @lazyastronomy3348 5 years ago +5

    Great video, Tony! Allow me to nerd back on one important area this pertains to: astrophotography. As an astrophotographer, I've seen this explanation many times, but it was neat to see it from a daylight shooter's perspective. The Bayer matrix and the reduced light gathering per photosite is the main reason why most astrophotographers switch to mono cameras eventually. At that point, we put red, green, and blue filters individually over the mono sensor to gather more light for each channel and combine them in post. Since our targets don't move quickly, this can allow for much greater resolution in the final image and less total imaging time.
    We're still blocking light when using those filters, so we also shoot luminance, which is mostly unfiltered, which is why I thought the RGBW matrix you described was really interesting. It essentially allows the color data to be mixed with straight luminance at higher sensitivity, which is what the best galaxy photographers do now, but all at once! You still have the resolution loss, but you get a big light gathering/detail/noise-reduction boost. I could see those sensors being the #1 choice of CMOS astrophotographers once they get built into an astro version with cooling and computer control.
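The mono LRGB workflow described in this comment can be sketched in a few lines. This is an illustrative toy, not a real astro-processing API: the function name and the simple brightness-matching blend are assumptions, and it presumes pre-registered frames normalized to [0, 1].

```python
import numpy as np

def combine_lrgb(L, R, G, B):
    """Blend mono luminance and per-filter frames into one color image.

    A simplified LRGB combine: chrominance comes from the R/G/B frames,
    overall brightness from the (better-sampled) luminance frame.
    All inputs are 2-D float arrays normalized to [0, 1], pre-registered.
    """
    rgb = np.stack([R, G, B], axis=-1)
    # Rescale each pixel's color so its mean brightness matches the L frame.
    brightness = rgb.mean(axis=-1, keepdims=True)
    scale = L[..., None] / np.maximum(brightness, 1e-6)
    return np.clip(rgb * scale, 0.0, 1.0)

# Toy 2x2 frames: the luminance frame drives the final brightness.
L = np.array([[0.3, 0.2], [0.3, 0.3]])
R = np.array([[0.4, 0.1], [0.2, 0.2]])
G = np.array([[0.2, 0.1], [0.2, 0.2]])
B = np.array([[0.1, 0.1], [0.2, 0.2]])
img = combine_lrgb(L, R, G, B)
```

Real stacking software does this blend in a perceptual color space rather than with a raw per-pixel scale, but the idea is the same: sharp, deep L carries the detail, the filtered frames carry the color.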

  • @itsalex.720
    @itsalex.720 5 years ago +1

    I love your mindful decision to do this in a sit down chat/whiteboard format. It really connects. Keep it up T&C!

  • @Reaper89i
    @Reaper89i 5 years ago +1

    Very good video, Tony!

  • @marcelrothmund2447
    @marcelrothmund2447 5 years ago +4

    Thank you Tony, for making complex information understandable! Great video.

  • @raphaellencrerot
    @raphaellencrerot 5 years ago +6

    Very impressed by your art of teaching such a technical subject!
    Keep going.

  • @JuanMartinezJones
    @JuanMartinezJones 5 years ago +9

    This video was just incredible. As a biologist I am really proud to see you accurately using biological examples to illustrate the ability of the camera/us to see.
    Love it, please keep doing more of this!

    • @UHFStation1
      @UHFStation1 3 years ago

      If we kept as many green cones but increased the number of red and blue cones to be equal in number, would we see roughly the same, even if it might be wasteful as far as useful information is concerned?

  • @panagiotistsiverdis
    @panagiotistsiverdis 5 years ago +15

    I enjoy your videos exponentially more when you become all scientific and dive to the core of the subject! Amazing video! Nice Job!

  • @Rickyp0123
    @Rickyp0123 3 years ago +1

    I just found Tony’s nerd videos and I am a fan!
    As a quick aside related to the beginning, where he said the brain assigns colors to different wavelengths of light: it is interesting to note how language is extremely influential in shaping what colors our brain has at its disposal to assign to wavelengths. For example, words like “blue”, “green” and “teal” dictate at what wavelength a color stops being blue and starts being greenish. Most interestingly, brown is not even a color: it is just dark orange; but since it is so common and we have a separate word for it, we consider it a color and can tell the “difference” between brown and orange, even though this is not the case for any other color, e.g. dark blue is still blue.

  • @robbedoeslegrand236
    @robbedoeslegrand236 5 years ago +8

    5:43 Blue has the highest frequency in the visible spectrum. I think Tony made a slip of the tongue there.

  • @johnsmith3856
    @johnsmith3856 5 years ago

    I’m an EE and I love the nerdy parts of photography. Given your clout with the industry, I’d love to see an interview with one of the technical staff of any camera company.
    It’s refreshing to take a deep dive every once in a while. I hope you keep doing these vids! Thanks Tony and Chelsea!

  • @BrianTheCameraGuy
    @BrianTheCameraGuy 5 years ago +2

    Truly love these nerdy videos. Love geeking out to this type of stuff. Really great to learn more about the inner workings of a camera. I had no idea how the image/light was captured. Thanks Tony.

  • @RedTick2
    @RedTick2 5 years ago +1

    Another great detailed video Tony!

  • @nathanlucasphotography7781
    @nathanlucasphotography7781 5 years ago +6

    Killer video! You do a damn good job at taking a complex subject and teaching it so everyone can understand. That’s a serious gift bud...Bravo and strong work! 👏🍻

  • @Soloskillz
    @Soloskillz 5 years ago

    I have a Fuji camera. Thank you for explaining why there is color bleed into eyes and teeth when people are small in the frame and when ISO is high. It's not an issue when the sensor is adequately saturated, for anyone interested in Fuji.

  • @AgnostosGnostos
    @AgnostosGnostos 5 years ago +3

    Cameras actually see beyond the visible spectrum of light. Digital image sensors can capture infrared and ultraviolet light and can be used for infrared and ultraviolet photography. Ultraviolet photography can be unethical, though: ultraviolet light can penetrate people's clothing, so ultraviolet photos can be too revealing of what is behind the clothes. Of course there are summer clothes with UV protection, but very few people wear them.
    Both infrared and ultraviolet light can deteriorate photos, and both are blocked by a thin filter above the camera's digital image sensor. This filter isn't very difficult to remove, but there is always the danger of scratching the sensitive sensor below, so experiment with old, cheap, used digital cameras from eBay.
    In the lesser-known case of ultraviolet photography, removing that filter isn't the only problem. All common modern lenses have anti-UV coatings, which restrict how much of the ultraviolet spectrum reaches the sensor. But some older lenses don't have anti-UV coatings and are available on eBay: for example Novoflex and Steinheil München lenses, and other very cheap lenses, mostly from the Eastern Bloc of the Soviet era. You also need a special ultraviolet-pass filter (not the common UV filters, which block UV light), which isn't very expensive. It has a dark violet color and blocks the visible and infrared spectrum while passing ultraviolet. Baader produces such filters.
    Ultraviolet photography is easier during midday on summer days, when the ultraviolet light is very intense. Many flowers look different in ultraviolet photographs; insects can see ultraviolet light and flowers have evolved accordingly. For further info just google ultraviolet photography, or check the YouTube video titled: Hackaday Supercon - David Prutchi : DIY Ultraviolet Photography.
    For infrared light there are affordable infrared filters for lenses that block the visible and ultraviolet spectrum and pass only infrared to the sensor. They have a very dark red color and demand long shutter speeds. Infrared used to be easier with infrared films, which still exist.
    Foliage looks much different in infrared light. As with ultraviolet photography, the best time to shoot is midday on summer days, when the sun's infrared radiation is very strong.

    • @eye4invisible787
      @eye4invisible787 5 years ago

      UV bandpass (absorption) and dichroic (interference) filters are generally very expensive (starting at around USD 200), especially when you have to stack a bandpass with an IR cut filter (such as a Schott S8612, which isn't cheap either) to allow only UV light through.
      Baader filters are designed for telescopes but can be used on a dual-spectrum or full-spectrum converted camera; however, since they are dichroic, the angle of incidence plays a big part in how a UV image is captured (they also oxidise faster than UV bandpass filters). The "dark violet" colour glass you mention is Woods glass (used in black lights), and that also leaks IR light, so it has to be stacked. Baader dichroic filters do not need to be stacked, and are mirror-like in appearance.
      IR filters can be of many colours, depending on the transmittance and wavelength cut-off. Deep red filters (very dark but not completely opaque to the human eye) are around the 720nm mark, so they leak a little bit of visible light, whereas an 850nm filter is completely black and opaque to the eye. Conversely, a 550nm filter lets in a lot of visible and infrared light, and is a transparent orange colour. In the case of IR, shutter speeds are not that slow for a full-spectrum converted camera (especially at midday), and you can still shoot in the early morning and late afternoon without an issue (unlike UV).
      Also, UV-capable lenses (even the vintage ones on ebay) are becoming more expensive. Plus, IR and UV light focus differently from visible light, so even older lenses without anti-UV coatings can have a focus-shift issue, making sharp images difficult to obtain.

  • @WarrenWalksNYC
    @WarrenWalksNYC 5 years ago

    As a retired physicist/nerd/geek and now into photography, may I suggest a new Nobel Prize for best teacher of the year - Tony Northrup.

  • @DPGrupa
    @DPGrupa 5 years ago +1

    Btw, the latest Pentax cameras can imitate an AA filter by moving the sensor around a bit, so it's the best of both worlds

    • @DerHeimatlose1
      @DerHeimatlose1 5 years ago

      Not only Pentax.
      I don't remember which camera it was, but I think there's also an Olympus with this feature.
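The sensor-shift trick described in this thread can be modeled as averaging copies of the image offset by one photosite, mimicking a sensor wobbled in a tiny square during the exposure. `simulate_aa` is a hypothetical name for this sketch, not any camera maker's API:

```python
import numpy as np

def simulate_aa(img, shifts=((0, 0), (0, 1), (1, 0), (1, 1))):
    """Approximate an optical low-pass (AA) filter by averaging copies of
    the image shifted by one photosite in a small square pattern."""
    acc = np.zeros_like(img, dtype=float)
    for dy, dx in shifts:
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(shifts)

# A one-pixel checkerboard is the highest spatial frequency the sensor can
# hold; the simulated AA filter blurs it to flat gray, killing the detail
# that would otherwise alias into moiré.
y, x = np.indices((6, 6))
checker = ((x + y) % 2).astype(float)
smoothed = simulate_aa(checker)
```

This also shows the trade-off the video discusses: the same averaging that suppresses moiré removes real single-pixel detail.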

  • @mannyvidsnyc
    @mannyvidsnyc 5 years ago +4

    Amazing, well done Tony 👍 really enjoy when you break down the science

  • @legacyfitness7469
    @legacyfitness7469 5 years ago

    I dig this nerdy stuff deep, oceanic sima black hole crazy deep. Just can’t get enough. Keep them coming Tony

  • @coolmac11
    @coolmac11 5 years ago +1

    These videos are great! They allow for a greater appreciation of the complexity of the equipment we take for granted!

  • @spacemonkey200
    @spacemonkey200 2 years ago

    I knew Tony could explain this in a way I could understand. Thanks Tony.

  • @AstroFarsography
    @AstroFarsography 4 years ago

    Mega interesting stuff. Always fun to learn and dig deep into this. Thanks Tony!

  • @Astrolavista
    @Astrolavista 5 years ago

    RGBW makes a lot of sense to me and we already use a similar technique in the astronomy world. In astrophotography using mono cameras and filters, we use luminance (w) plus RGB.

  • @PSun2205
    @PSun2205 5 years ago +3

    Was that over 21 minutes?? I didn't even realize. So glad I watched it.

  • @Alsayid
    @Alsayid 5 years ago

    I really liked this! I learned more about the way sensors capture color, and why they are laid out that way, than I have from anywhere else. Thanks!

  • @TERN666
    @TERN666 5 years ago +4

    There were also triple-CCD video cameras with a prism; each sensor captured a single RGB color. The color accuracy was excellent, but the technology was expensive and complicated, so it never made its way to photography.
    As for now, manufacturers have partly solved the Bayer problem simply by increasing the number of pixels. Which kind of works: more cells means more data for approximation, plus you can get rid of AA filters simply because there are enough photosites per square mm to forget about aliasing in most cases.

    • @kilohotel6750
      @kilohotel6750 5 years ago

      Pentax used to use CCD sensors in their DSLRs.

    • @TERN666
      @TERN666 5 years ago

      @@kilohotel6750 All manufacturers used to have CCDs in their cameras (Canon, Nikon, Sony). They had the same Bayer filter array in front of them. I was talking about triple-chip + prism, where each chip received only one color and no filters were required at all.

  • @ivankiefer3886
    @ivankiefer3886 5 years ago +2

    Great video, Tony. It's interesting to see what goes on behind the beautiful pictures.

  • @nhancao4790
    @nhancao4790 5 years ago +36

    Vsauce: this is not yellow
    Oh wait, wrong channel

  • @kdavis99
    @kdavis99 5 years ago +7

    You guys produce excellent videos...thanks as always for all of your content!

  • @aphelps13
    @aphelps13 5 years ago

    I love your videos. You have a knack for taking topics that I have a mild interest in, but not enough to spend hours researching, and condensing them into a digestible, hugely educational 20-minute video.

  • @kineticbe
    @kineticbe 5 years ago +3

    I keep learning a lot from you both. Thanks for the good work!

  • @MiLaKreativ
    @MiLaKreativ 5 years ago +1

    A fact that is relatively complex for non-physicists is explained well and simply. Congratulations.

  • @MrPsylocibine
    @MrPsylocibine 5 years ago

    This is the best kind of video on your channel; please keep it geeky, Tony

  • @MiguelRozsas
    @MiguelRozsas 4 years ago

    About the Foveon sensor, just a minor adjustment: there are no filters inside it, in the middle of the sensor, as Tony drew. It is just a block of silicon, like any other sensor. What is different about a Foveon sensor is that the electrical signal is read from the top of the sensor, from the middle, and from the bottom. It happens that blue photons are converted into electrons near the top of the sensor, green photons in the middle, and red photons at the bottom. By taking the electrons from the top, middle, or bottom, you know what kind of light produced the electricity (the signal).
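The depth-dependent absorption this comment describes follows the Beer-Lambert law: shorter wavelengths are absorbed closer to the silicon surface. A minimal sketch; the penetration depths and layer boundaries below are illustrative round numbers, not Foveon's actual geometry:

```python
import math

# Approximate 1/e penetration depths of light in silicon, in microns.
# Illustrative values; real absorption curves vary with temperature and doping.
PENETRATION_UM = {"blue": 0.4, "green": 1.5, "red": 3.5}

def fraction_absorbed(color, z_top, z_bottom):
    """Fraction of incoming photons absorbed between two depths,
    using Beer-Lambert attenuation I(z) = I0 * exp(-z / depth)."""
    d = PENETRATION_UM[color]
    return math.exp(-z_top / d) - math.exp(-z_bottom / d)

# Three stacked collection zones, roughly like a Foveon stack: the top zone
# catches mostly blue, the middle mostly green, the bottom mostly red.
for name, z0, z1 in [("top", 0.0, 0.6), ("middle", 0.6, 2.0), ("bottom", 2.0, 6.0)]:
    shares = {c: round(fraction_absorbed(c, z0, z1), 2) for c in PENETRATION_UM}
    print(name, shares)
```

Even with these toy numbers, the ordering comes out as the comment says: blue dominates the top readout, green the middle, red the bottom, which is how reading charge at three depths recovers color without filters.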

  • @davidmcclure6275
    @davidmcclure6275 5 years ago

    Great job Tony! I’m an engineer and appreciate a deeper look into how things work. After watching your video, I can better understand how sensors of the future could have much better low light capability. Looks like there’s room for more innovation.

  • @dzllz
    @dzllz 5 years ago +2

    I love your nerdy videos the most. Amazing! So well explained, keep it up.

  • @savvasmarkou8817
    @savvasmarkou8817 5 years ago +1

    Excellent video Tone!

  • @GeraldBertramPhotography
    @GeraldBertramPhotography 5 years ago +1

    Tony, this was a fantastic video. You are a great educator, and I for one would love to see more of these types of educational videos. I know they don't really help pay the bills, but they really are enjoyable to watch.

  • @MyHumanWreckage
    @MyHumanWreckage 5 years ago

    As usual, excellent video, but I can guarantee 99% of those who watch it will be confused. Having worked in graphics for 30 years, I can honestly say you are spot on, except we always put the longer wavelengths to the left and shorter to the right. Your chart was the opposite. The reason we do this is to create less confusion by keeping a standard. RGB or ROY G BIV always makes sense because you are always going from longer to shorter wavelengths. A suggestion for a future video would be to explain the difference between primary and secondary colours and how they work.

  • @juanprietovideos490
    @juanprietovideos490 5 years ago +6

    Great geek out! As a fellow science nerd, I appreciate the research you did to put this video together. Great job! It's funny how our entire world is really just in our head. We have no direct input from our senses. Holographic universe maybe? LOL

  • @davidnordstrom5162
    @davidnordstrom5162 5 years ago +1

    Thanks for a great CLEAR exposition.

  • @thedondeluxe6941
    @thedondeluxe6941 5 years ago

    Great video! Nice to have every sensor option summed up next to each other like that.

  • @cameradoctor205
    @cameradoctor205 5 years ago

    Great video, Tony ... I love when things are explained down to the full extreme level!

  • @victormultanen1981
    @victormultanen1981 5 years ago +1

    It is a really good explanation of the frequencies of light!
    Thank you Tony!!!!

  • @degrootl
    @degrootl 5 years ago +1

    Love this nerdy stuff! Keep more coming! I like that you warn users that a video is nerdy, so those who have no interest in that content can opt to avoid it, while the nerds amongst us can dive in.

  • @hamiddorosti3588
    @hamiddorosti3588 5 years ago +1

    Love what you do, Tony

  • @SRHerriott
    @SRHerriott 5 years ago

    Absolutely love this, you’re fascinating when you get nerdy.

  • @obednaturephotography29
    @obednaturephotography29 5 years ago

    This video is what I like the most. Keep combining science and PHOTOGRAPHY

  • @arijitghosh6378
    @arijitghosh6378 5 years ago

    Watched it a second time, still got blown away. I would also love to see a video about how a raw image is actually created after the light hits the sensor: what the different processing steps are, and whether it's possible to alter the way the raw data is captured in camera through camera software, etc. Looking forward to it 😁.

  • @MiguelMartinez-hm9wk
    @MiguelMartinez-hm9wk 5 years ago

    This was awesome. Probably some of my favorite content on YouTube right now. Please do more

  • @beberdje
    @beberdje 5 years ago +1

    I love these technical videos! Please keep doing them Tony!

  • @AliasJimWirth
    @AliasJimWirth 2 years ago

    This is really good. I love this stuff. Thanks so much for including this type of content in your mix. I enjoy it all.

  • @FalcoII
    @FalcoII 5 years ago

    These nerdy videos are great. I knew some of the points, but in this video it's all nicely put together!

  • @rocksandoil2241
    @rocksandoil2241 5 years ago

    Nice refresher from my old remote sensing days when we used satellite imagery and RGB detectors

  • @fmh357
    @fmh357 4 years ago

    I learned something I didn't know today about camera sensors: photo-sites can be shared, and pixels are interpreted by the camera's processor. Kind of like how the brain is what sees, by interpretation, and the eyes simply collect information.
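The "shared photosites, interpreted pixels" idea is exactly what demosaicing does. A minimal sketch, assuming an RGGB mosaic and plain neighborhood averaging; real cameras use far smarter edge-aware interpolation, and the helper names here are my own:

```python
import numpy as np

def box3(a):
    """Sum over each pixel's 3x3 neighborhood (zero-padded at the edges)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Tiny demosaic for an RGGB Bayer mosaic.

    `raw` is a 2-D array where each photosite recorded only one color:
      even rows: R G R G ...   odd rows: G B G B ...
    Each pixel's missing colors are estimated by averaging the nearby
    photosites that did record that color.
    """
    h, w = raw.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red photosites
    masks[0::2, 1::2, 1] = True   # green photosites on red rows
    masks[1::2, 0::2, 1] = True   # green photosites on blue rows
    masks[1::2, 1::2, 2] = True   # blue photosites
    out = np.zeros((h, w, 3))
    for c in range(3):
        plane = np.where(masks[..., c], raw, 0.0)
        weight = masks[..., c].astype(float)
        out[..., c] = box3(plane) / np.maximum(box3(weight), 1e-9)
    return out

# A flat mid-gray scene should demosaic back to mid-gray everywhere.
raw = np.full((4, 4), 0.5)
rgb = demosaic_bilinear(raw)
```

Every output pixel borrows values from neighboring photosites, which is why one photosite contributes to several final pixels.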

  • @stuartschaffner9744
    @stuartschaffner9744 5 years ago

    Oh so cool! Tony jumps right in where more timid individuals fear to tread. Anyone who has ever tried to explain this to a general audience knows how well Tony did here. My only quibble would be that there are three made-up colors: yellow, cyan, and magenta. I think that what most people say is purple is really magenta. Cyan is blue and green, or the relative absence of red. Many men have genetic difficulties in the blue versus green area.

  • @pedromfs
    @pedromfs 5 years ago

    Wow, I missed these videos, and this is probably the best of the best. Thanks, Geeky Tony

  • @Vamp898
    @Vamp898 3 years ago

    Because I'm pretty nerdy about colour, I started getting into SIGMA cameras back in 2014/2015. The Foveon Quattro sensor is actually one of the biggest and most clever inventions ever created, and people ask me to this day "How do you edit your photos so your colors come out that awesome?" and I always tell them "I don't, it's my camera", which generally leads to even more dissatisfaction than people had before.
    People tend to really hate things they want but can't have.
    Anyway, the Quattro sensor is an awesome invention. It works because all layers in a Foveon sensor record all wavelengths of light (to different extents), which is not just the reason those colours are that awesome but also makes it possible to have different resolutions in the layers.
    With 29 physical megapixels, you get 60 megapixels of information that result in a (compared to a Bayer sensor) 39 megapixel image.
    You get more pixels than you physically have on your sensor, without interpolation.
    Can it get more nerdy?
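For reference, the "29 physical megapixels" follows from the layer resolutions Sigma published for the Quattro generation: one full-resolution top layer plus two quarter-resolution lower layers. The "39 MP Bayer-equivalent" figure is Sigma's own estimate and does not fall out of this arithmetic:

```python
# Layer resolutions Sigma published for the Quattro generation (megapixels):
top_mp = 19.6      # full-resolution top layer (captures luminance detail)
mid_mp = 4.9       # quarter-resolution middle layer
bottom_mp = 4.9    # quarter-resolution bottom layer

physical_mp = top_mp + mid_mp + bottom_mp
print(round(physical_mp, 1))   # 29.4, the "29 physical megapixels" in the comment
```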

  • @radiozelaza
    @radiozelaza 5 years ago

    Use a Quad-Bayer array with yellow instead of green cells and you get a yellow tint in the photos, like in the Huawei P30 Pro.

  • @jiwaplastik6520
    @jiwaplastik6520 5 years ago +76

    God: how nerdy can you get???
    Literally nobody:
    Tony: Yes

  • @richardmorris7295
    @richardmorris7295 5 years ago

    One of your best videos. This was excellent. Good work, Tony

  • @camerasutra247
    @camerasutra247 5 years ago +2

    Good video nicely delivered. Keep up the good work.

  • @christaylor8410
    @christaylor8410 5 years ago

    Thanks Tony! Great, understandable explanation of a complex topic.

  • @Michael-OBrien
    @Michael-OBrien 5 years ago

    Blue = highest frequency, not lowest.
    For those who’ve never really touched on this topic before, you did a very good job at describing the fundamentals. 👍

  • @andyyoon7853
    @andyyoon7853 5 years ago

    It's heavy but very useful. Thanks, Tony!

  • @gersonmadrid6675
    @gersonmadrid6675 5 years ago +1

    Mega good. Thanks Tony.

  • @wirechair
    @wirechair 4 years ago

    Loved this!! Always wondered how the sensor captures color information that corresponds to pixel RGB values in the saved image file. This gives pertinent insight into that process.

  • @certoglenn4840
    @certoglenn4840 5 years ago

    Love it! Pointing friends to this video. Thanks, Tony.

  • @stefanstojoski4572
    @stefanstojoski4572 5 years ago +1

    Useful. Such a good video

  • @ChrisProuse
    @ChrisProuse 5 years ago

    Really enjoyed that Tony - well done! :)

  • @motherbrain2000
    @motherbrain2000 5 years ago

    Everyone has forgotten about another Fuji sensor innovation: the Super CCD sensor. It had a honeycomb photo-site structure and was highly regarded among professionals in the early 2000s.

  • @donstravelsandrants.
    @donstravelsandrants. 5 years ago

    WOW, I love watching and listening to stuff like this. You're a definite nerd, Tony.

  • @ricardozettl6713
    @ricardozettl6713 5 years ago

    Once again: big support!
    That is real knowledge, and you know how to present it so that I can understand it.
    With the X-Trans I believe there was a little error, but it doesn't matter for the general message.

  • @keeganmiller442
    @keeganmiller442 5 years ago +1

    Very useful stuff! Thanks Tony

  • @Eye-V.
    @Eye-V. 5 years ago +1

    I love these types of videos. Thanks man 🤙

  • @bravisha
    @bravisha 3 years ago

    Have watched hundreds of your videos (you & Chelsea). First time commenting and liking (my bad). I realize I am a nerd, as this video really got me hooked (not the only nerd qualification). Great job, both of you. Thanks for sharing and growing the knowledge. Salute!

  • @sanderbass
    @sanderbass 5 years ago

    This was just great! Would love more of these kinds of videos!

  • @henkkaa88
    @henkkaa88 5 years ago +1

    I remember these from elementary school and then again in high school. Good video, but when I read the comments I feel that people should pay attention in school.

  • @videojeroki
    @videojeroki 5 years ago

    There is also the "3CCD camera", which separates the wavelengths with a prism and uses 3 sensors without filters to capture the 3 RGB components: no demosaicing, pure color beauty.
    The drawbacks are the complexity of the prism adjustment and the price of 3 image sensors; lenses are also specific to that kind of camera. It is not used for the consumer market, but offers incredible IQ.
    I'm actually working right now on a camera simulation using "multi-spectral" ray tracing; it is a lot of fun ;)

  • @joelsd1
    @joelsd1 5 years ago

    Tony, I enjoy your “nerdy” videos, and in this one your goal of explaining the processing of light is appreciated. However, I have a few comments.
    Light, which is a form of electromagnetic energy, can be described as a traveling wave that is periodic in both time and distance. Frequency is associated with a traveling wave’s periodicity in time and has units of cycles per second (Hertz). Wavelength is associated with a traveling wave’s periodicity in distance and has units of distance (typically metric: meters, kilometers, nanometers, microns, etc.). Frequency and wavelength are inversely proportional: frequency multiplied by wavelength equals the speed of light (the speed of light in a vacuum is typically used). So light waves of small wavelengths have high frequencies, while light waves of large wavelengths have low frequencies.
    The response curves you drew labeled the x-axis in meters (distance), and therefore each curve is a wavelength curve. Although you described the physics properly, you erroneously used the term frequency instead of wavelength. Frequency response curves have the x-axis in Hertz.
    In all digital processing, the analog signal (i.e., the light being captured by the camera) is sampled at a particular rate (the sampling rate). This analog signal has a particular frequency spectrum (a representation of the energy of the signal as a function of its frequencies), and the distance from the minimum frequency to the maximum frequency of the spectrum is called the bandwidth. According to the Nyquist/Shannon sampling theorem, to get perfect sampling (that is, to reconstruct the analog signal from the sampled signal without any errors) the sampling rate must be greater than twice the maximum frequency of the analog signal (the Nyquist rate). When the analog signal is sampled, the frequency spectrum of the sampled signal consists of multiple copies of the analog signal’s spectrum, each separated by the sampling rate. If the sampling rate is greater than the Nyquist rate, these copies are separated far from each other in frequency. However, if the sampling rate is lower than the Nyquist rate, the copies overlap, and the reconstructed signal will have errors (that is, it will be distorted). This is called aliasing.
    In photography, the sampling rate is fixed by the number of effective pixels in the sensor. Since the resolution needed to capture all of the small details in an image will always be greater than the number of pixels available in the sensor, aliasing will always occur to some degree. To eliminate aliasing, an anti-aliasing (or optical low-pass) filter effectively reduces the bandwidth of the analog signal so the Nyquist rate can be satisfied. However, reducing the bandwidth reduces the details of the original photo. So without an anti-aliasing filter, errors at the smaller details (high frequency/shorter wavelengths) will occur.
    Again, I enjoy your videos and hope this is clear.
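The aliasing mechanism described in this comment can be demonstrated numerically: a tone above the Nyquist limit produces exactly the same samples as a lower-frequency tone, so the two are indistinguishable once sampled. The frequencies below are arbitrary example values:

```python
import numpy as np

fs = 10.0                      # sampling rate, Hz (Nyquist limit fs/2 = 5 Hz)
n = np.arange(32)
t = n / fs

f_high = 9.0                   # a tone above the Nyquist limit...
f_alias = fs - f_high          # ...folds down to this 1 Hz tone

undersampled = np.sin(2 * np.pi * f_high * t)
alias = -np.sin(2 * np.pi * f_alias * t)   # identical sample values, folded phase

print(np.allclose(undersampled, alias))    # True
```

In a sensor, the "tone" is fine spatial detail and the samples are photosites; detail finer than the photosite pitch folds down into false coarse patterns (moiré), which is exactly what an optical low-pass filter removes in advance.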

  • @KelbyThwaits
    @KelbyThwaits 5 years ago +2

    It would be good to clear up a little misinformation regarding frequency and wavelength. Long wavelength = lower frequency; short wavelength = higher frequency. You referred to “short” and “long” frequencies, which ends up being misleading (and having an opposing meaning) in the way it was presented here.
    Red = lower frequency (longer wavelength) and blue/violet = higher frequency (shorter wavelengths). In the video you started off stating that blue is a “shorter frequency” and red is a “longer frequency” (by which I presume you meant wavelength), but then contradicted that around 5:48, saying “Blue is a lower frequency.”
    I like your videos a lot; I just don’t want people getting all confused over the super-awesome physics of light. Keep making great stuff!
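The relationship being corrected here is just f = c / λ. A quick check with round-number wavelengths for red and blue (the exact wavelengths chosen are arbitrary representatives of each color):

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def freq_thz(wavelength_nm):
    """Frequency in THz for a vacuum wavelength in nm, from f = c / wavelength."""
    return C / (wavelength_nm * 1e-9) / 1e12

print(round(freq_thz(650)))   # red, ~461 THz: longer wavelength, lower frequency
print(round(freq_thz(450)))   # blue, ~666 THz: shorter wavelength, higher frequency
```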

  • @Tbonyandsteak
    @Tbonyandsteak 1 year ago

    Same with long exposures at night: colors get highly saturated, while in harsh light they vanish into white. Vintage lenses with bad glass have "character" colors, while modern transparent glass is sharp but flat. I've also noticed the camera picking up colors you can't see with your eyes. I took a picture of a gray sea on a cloudy day with the sun shining through, and to the camera the sea was blue. So I tend to underexpose in flat light and overexpose in dark scenes, since that gives good separation of colors.

  • @elnax88
    @elnax88 3 years ago +3

    Why not CMYW sensors?
    That would give you 66% more light
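A back-of-envelope version of this light-gathering claim, under idealized filters: treat white light as equal parts R, G, B, let each primary filter pass one band and each complementary (C/M/Y) filter pass two. Real filter curves overlap, so actual gains are smaller, and the exact "66%" depends on what you compare:

```python
# Idealized filter transmissions, treating white light as equal parts R, G, B:
# a primary filter (R, G, B) passes one band, a complementary filter
# (C = G+B, M = R+B, Y = R+G) passes two, and a clear "white" site passes all.
PASS = {"R": 1/3, "G": 1/3, "B": 1/3, "C": 2/3, "M": 2/3, "Y": 2/3, "W": 1.0}

def mosaic_transmission(pattern):
    """Average fraction of incoming white light a filter mosaic lets through."""
    return sum(PASS[c] for c in pattern) / len(pattern)

rggb = mosaic_transmission("RGGB")   # standard Bayer: 1/3 of the light
cmyw = mosaic_transmission("CMYW")   # complementary + clear: 3/4 of the light
print(round(cmyw / rggb, 2))         # 2.25x under these idealized assumptions
```

The catch, as with the RGBW pattern in the video, is that complementary filters make color reconstruction noisier: the demosaicer must subtract overlapping channels to recover R, G, and B, which amplifies chroma noise.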

  • @EnormousSmartass
    @EnormousSmartass 5 years ago

    Awesome science lesson, Tony! :) The sensor with yellow in the P30 Pro is interesting as well.
    For another video: the Google camera app now supports computational RAWs. These files are surprisingly impressive in post when loaded up on your PC.

  • @SydReinhardt
    @SydReinhardt 5 years ago

    Great and informative video, Tony; loved it, and no matter how nerdy, I never got sleepy 'cos you made it so interesting. Well done and thank you. And pay no heed to the supernerds who complain when you say frequency and mean wavelength, or about whether purple is a colour. You helped us understand a great deal more and I really appreciate it. Keep it up. (Missed Chelsea though!)