A more useful term than depth of field might be field of, or depth of, acceptable focus. The reason for this phenomenon is a concept we were taught in my photography course as "circles of confusion": lensed light creates a plane of perfect focus, where parallel rays of light converge to a point. In front of and behind this plane is a "field" of infinitely many, infinitesimally thin planes, where those rays are lensed not to a point but to a circle (a circle of confusion), growing larger on each plane further from the focal plane. In images with deeper fields of acceptable focus, the circles of confusion created by the refracted light rays are small enough to be perceived as points over a greater distance. This is also why a smaller aperture creates a deeper field of acceptable focus: by selectively gating most of the non-parallel rays of light, it creates smaller circles of confusion throughout the volume of the image field, expanding the field of acceptable focus.
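To make the circles-of-confusion idea above concrete, here's a minimal thin-lens sketch in Python. The lens and distances are illustrative numbers I've chosen, not figures from the video:

```python
def blur_circle(f, n, focus_dist, obj_dist):
    """Diameter of the circle of confusion on the sensor (all lengths in mm).

    Thin-lens model: rays from the focused subject converge exactly on the
    sensor; rays from objects at other distances converge in front of or
    behind it, leaving a blur disc that scales with the aperture diameter f/n.
    """
    aperture = f / n
    v_focus = f * focus_dist / (focus_dist - f)  # image distance for the subject
    v_obj = f * obj_dist / (obj_dist - f)        # where the other object converges
    return aperture * abs(v_obj - v_focus) / v_obj

# 50 mm lens at f/2.8 focused at 2 m; an object 10 cm behind the subject:
print(round(blur_circle(50, 2.8, 2000, 2100), 4))  # ~0.0218 mm blur disc
# Stopping down to f/8 shrinks every blur disc, deepening the field:
print(round(blur_circle(50, 8, 2000, 2100), 4))    # ~0.0076 mm
```

The blur never reaches zero away from the focal plane; stopping down just shrinks it, which is exactly the "field of acceptable focus" point above.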
Hey Tony, I think you are missing something here: the circles of confusion (i.e. the bokeh balls when almost in focus) can actually be calculated, so depth of field is not as subjective as you say. I remember that while studying physics, we actually calculated what arc (i.e. angle) an aperture (or lens system) could physically resolve and what the limits of being "in focus" were, because even light itself has its limits in that way. And then you still have to add the sensor dimensions... That depth of field calculator you showed does not seem to factor in sensor size, but you can actually calculate the exact physical properties when you know the size of the pixels in the sensor. You can then calculate the actual depth of field where the difference in focus cannot be resolved by the camera anymore (because all the circles of confusion are smaller than a pixel on the sensor), so everything appears equally sharp. I would guess that the calculator you showed just uses an old formula or estimation based on ~12 MP sensors, which obviously fails for high-MP sensors. But that doesn't make the concept untrue; only the estimation fails. I was rather astounded back in my uni days, when we could exactly calculate the depth of field for a lens and sensor and then measure it with one of those printed, triangular focus-checkers.
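The calculation this commenter describes can be sketched with the standard thin-lens DOF formulas. The 0.005 mm "pixel pitch" below is an illustrative assumption for a dense modern sensor, not a specific camera:

```python
def dof_limits(f, n, s, coc):
    """Near/far limits of acceptable focus (mm), given focal length f (mm),
    f-number n, focus distance s (mm), and a chosen circle of confusion coc (mm)."""
    h = f * f / (n * coc) + f               # hyperfocal distance
    near = s * (h - f) / (h + s - 2 * f)
    far = s * (h - f) / (h - s) if s < h else float("inf")
    return near, far

# 50 mm at f/8 focused at 3 m, with the classic 0.03 mm full-frame CoC:
print(dof_limits(50, 8, 3000, 0.03))    # roughly 2.34 m .. 4.19 m
# Same shot judged pixel-by-pixel on a dense sensor (~0.005 mm pitch):
print(dof_limits(50, 8, 3000, 0.005))   # roughly 2.87 m .. 3.15 m, far shallower
```

Same lens, same settings: only the sharpness criterion changed, and the "depth of field" collapsed, which is the commenter's point about pixel size.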
Depth of field is a gradation, it only measures falloff from the focal plane. Whether that falloff is shallow or deep is what is being described, you're not widening the effective focal range before that falloff is applied, it is always applied, you are affecting the falloff of the blur itself.
Spent the whole video waiting to hear the words “circles of confusion.” I was left hanging. Maybe do a follow-up to explain what depth of field actually means?
Depth of field only really makes sense if one also talks about the circle of confusion. After defining what amount of blur is acceptable, it's possible to calculate DOF. Quite often the circle of confusion is just set too large and one gets disappointed, especially with a sensor with a small pixel size. Not a lie, just often oversimplified.
For sure. I made a comment up there. Many factors are wrong in this video, but mainly: being "in focus" is arbitrary, and sharpness is always a diminishing function moving away from the focus point. And autofocus is based on maximizing sharpness at the focus point, not maximizing the mean sharpness across the DOF.
The circle of confusion is the way I learned it many decades ago from my dad's photo magazines. Starting at almost a point of perfect sharpness at the focused distance and growing as a cone in either direction. It was well defined that it was not perfect focus but what is acceptable sharpness for the film and final viewing. I.e., if you are shooting 35mm and printing 8x10, how much of an inch can a point be shown and still be considered effectively in focus when viewed at X distance? Blow it up to an 8-foot by 10-foot print and it looks out of focus at the same viewing distance, but back up to 10 times the distance (which would be typical) and it looks equally in focus.
Wait! When did depth of field start meaning that there is an area in front of and behind the focal plane where objects are going to be tack sharp? I think Tony has been confused all these years. It was always said to be an area in front of and behind the focal plane that was ACCEPTABLY sharp, which is not only accurate, it's logical too.
Yes, that's why you focus on the eye... it's tack sharp, and the tip of the nose is reasonably sharp, unless you're really close and shooting at f/1.2, where the tip of the eyelash might be joining the bokeh club.
I have to assume Tony is intentionally misunderstanding to demonstrate a point. Even semi-technical descriptions of depth of field always mention ACCEPTABLE or PERCEIVED sharpness. Rethinking signing up for the Art and Science of Photography series. Looks like the science may be unclear.
My texts, including Ansel Adams used the term "apparent" focus. There is only ONE focus plane. All the rest has circles of confusion. The effect is in the eye of the beholder.
He's not intentionally misunderstanding, he's intentionally representing a common misunderstanding amongst photographers (or at least inexperienced photographers).
As I understand it, DOF was once defined from an 8x10 print, held at a comfortable reading distance - what seems to be in focus in this situation. I wish you had mentioned this.
There aren't many lenses that have a flat focal plane. Curvature of this focal plane has flattened out more with recent lens designs; however, it's always there to some degree.
Would be a good idea to use a duplication or enlargement lens for this purpose, since they're designed to have a flat field. A lot of macro lenses qualify too.
I also noticed that what Tony calls the focal plane may not be, as he said, parallel to the sensor. In wide-angle lenses like the Canon EF 15mm Fisheye, the focus plane seems to curve to the same degree as the lens's distortion. The example with the tilt-shift was great, but I missed seeing fisheye tests on the table.
Great stuff. Thanks for putting this out. I do have a question that I was hoping you could help answer. I do a lot of portrait photography, and there are always shots where one part of the model is out of focus or blurry, and I can't figure out if it has to do with depth of field or my focus settings. For example, I have a shot where the model has her arm outstretched in front of her with her palm to the camera, signaling a stop motion✋️. When I focus on her face, her palm becomes blurry but her face is sharp. When I focus on her palm, her face becomes blurry but her palm is sharp. If I want to get the model's face and palm both in focus in this case, will a higher f-stop do the job, or is this a focus issue? Any insight you could share on this would be greatly appreciated.
Depth of field has always been related to a certain print size seen at a certain distance. Viewing a digital picture at 100% magnification will of course exaggerate any lack of focus. So, by the way, any DOF calculator that doesn't take pixel size into account can't give an accurate DOF measurement.
I remember when I was learning DOF, I kept seeing the phrase “acceptably sharp”, and I kept mentally tripping over that phrase….acceptable to whom? I kept thinking. I wish someone had mentioned this then! It would not have been so mysterious to me. 😂
Displays are becoming so good at showing detail, and the lack thereof. I noticed a difference in the side-by-side comparison before he even had to zoom in. Viewing on a 2021 iPad Pro.
I have never expected any part of an image outside the focal plane to be as sharp as anything on the focal plane. There's always going to be a fall off. I don't need the sharpness to be evenly fantastic throughout the entire depth of field either. One of the keys I always thought was viewing distance. Pixel peeping is the opposite of giving an image that proper distance. Prints are still prints and may look great in real life even without perfect sharpness at 200% as seen on a computer. I guess I misunderstood depth of field all these years as I never thought it meant perfect sharpness throughout the area. More like acceptable.
You didn't misunderstand it; this video creates confusion where there isn't any. It has always been taught (correctly) that depth of field is the apparent area of acceptable sharpness and that there will only ever be one plane in perfect focus.
@@jeffreywrightphotography Not only has it always been defined as the region of the image which is acceptably in focus, but there is even a definition as to what acceptably sharp means. I typed it in a separate message.
As you mentioned, the tilt feature is useful for product photography, but also useful for any close macro photography. Even if you're photographing an object flat-on, if the object's reflective, the reflection of the camera can be annoying. The shift feature solves that. The shift feature is also super-useful for real-estate photos where it's hard to get a square angle because of tight spaces, or you want the windows and walls to look square when shooting a vertical space. A room with a high ceiling can look very dramatic. You can also prevent the camera from appearing in reflections on mirrors or windows. A friend of mine loves his tilt-shift lens for travel; he uses it more than any other lens, to the point of wearing the paint off. He mostly uses the shift feature, and creates some really unique "impossible looking" architectural photos. What I'm saying, is tilt-shift lenses are underrated. (Although if you only want to use the shift feature, it's effectively just an in-camera crop. You could also just use a super-wide lens and crop in post.)
Crop in post isn't the same. You lose many, many pixels and stretch the ones that are left. Usually the top of the building, too. Been there, done that. Plus, it's the ability to compose in the viewfinder. Love my new Samyang 24mm tilt/shift on my Sony a7Rii.
I'm sorry Tony, but it should be obvious that you have a "block" of focus with a center peak, not an in-and-out border. The tilt-shift lens demo was amazing. That's freaking awesome. Keep up the good work.
For a given distance setting, there is one plane of sharpness, and a zone of acceptable blur. The latter is called depth of field. Depth of field depends on three factors: magnification, relative aperture, and enlargement. Magnification is calculated using the actual focal length and focusing distance. Lenses with "internal focusing" don't actually change focusing distance, but focal length instead. This has a considerable effect on magnification. The relative aperture is therefore also likely to change, and camera manufacturers may or may not take that into account when they program their built-in depth of field calculators. The third factor, enlargement, is rarely taken into account. Those age-old formulas are calculated for a viewing distance of 3-4 times the diagonal of the final image. A 10x15 cm or 4x6" copy would normally be viewed at a distance of around 20 cm or 8". Today, images are generally viewed on computer monitors with a diagonal of 14-27" at a distance of 10-30" (25-75 cm). Try moving back from your monitor to 3-4 times its diagonal. Things will look less blurry (depending on eyesight, of course).
DOF and "acceptably in focus" make assumptions about viewing distance and the size of the image being viewed. That is why things on a small monitor (e.g. the camera LCD), tend to look in focus even though they do not when viewed on a large, high res monitor. DOF calculators have simply chosen not to include viewing distance and image size in their calculations. They should offer that option.
I chuckled at the way Tony's "dad" seemed to nod or shake his head at each point Tony made. As for DoF, focus stacking is really the only solution for most of us. 👍
A filter thread mounted "TWO FIELD LENS" can get both the near and distant objects in focus. I got mine for $1 (used). Another solution is to get a crop sensor camera to get deeper depth of field.
If you really want to understand this, look at the concept of circles of confusion. Then you can see that the resolution of your sensor also affects depth of field. The depth of field was never defined as the distances fully in focus, just "acceptably sharp."
Glad to see you say this. This became obvious when even 36 mega-pixel sensors came out. Zeiss has excellent white papers on their website that explain depth of field fully and correctly. The best thing I learned is the concept of the "circle of confusion", which is the size of an area on your film or digital sensor that is the smallest it can resolve.
Yes, it is subjective to a point. And as other commenters have already pointed out, the subjectivity is tied to one parameter in the DoF formula: the circle of confusion. In the old days, we used an approximation for the CoC related to film format. I don't precisely remember, but it may have been based on 100 line pairs/millimeter resolution on 35mm film and an 8" x 10" print at 6x diagonal (6 * 12.8" = 76.8") viewing distance. We used a different CoC number for medium format. At 8" x 10", an 8x magnifying loupe on the focusing glass was easier than the formula; we would be focusing, tilting, shifting, swinging, and focusing a lot for one shot anyhow. And when in doubt, a Polaroid test shot with a 4" x 5" adapter in the 8" x 10" camera. In digital, we have ever more resolving sensors, although the 100 lp/mm in 1975 for color film and very good lenses was already very good. And note, film had grain but did not need wild-assed color guessing for missing colors, which today we need with Bayer-filtered sensors (which are analog, panchromatic, and colorblind to begin with). So the comparison with film is hard to make, for one because in digital we have raw processing (color guessing), where anti-aliasing may have been facilitated in hardware with a second filter layer over the sensor, and may again be done in post by raw processing too. As we pixel peep, we ought to adjust the CoC in the DoF formula according to today's use of images. As most raw shots end up on the interwebs at, say, 1,280-pixel resolution in JPEG, we might want to take that into account. The subjective part of the CoC aside, it clearly relates to the resolution of our lenses and sensors, and the quality of our raw processing. So there are a lot of variables involved here. Then there is the problem that the focal length of a lens is defined at infinity. If we focus closer, we shift the (entire) lens away from the sensor, and the mathematical, geometrical, laws-of-physics focal length gets longer.
Consequently the image angle gets smaller, and consequently the f-number on the diaphragm control ring is not correct anymore. (Turn this around and you can see that a "prime" lens with no focus breathing is actually a zoom lens that compensates for the focusing effect on focal length in terms of image angle. Potentially, in that process, the T-stop for the f-stop we had set remains the same.) Altogether, the problem with DoF is the DoF calculator app that does not take all these parameters and variables into account properly. The app may ask you for your camera make and model as well as lens make and model, while ignoring view size, firmware version, and raw processing. Having realized all that and entered the data to be given the "DoF" by the app, now change the camera for another model with significantly higher or lower resolution. The app I tested gave me the same distances. Uninstalled.
Idk if this will be addressed, but the way I was taught about depth of field was that the in-focus range was the range of generally acceptable sharpness/focus. Which means "they're more like guidelines anyway." Different people have different standards of acceptability, but for the average person it's a guideline for the range in which there's a decent chance you'll find the focus to be "close enough."
In a way the argument is mostly about semantics - hence the clickbait-y title. The least confusing way of putting it would be something like this: technically, in any photo, there's only one spot that is in perfect focus, but depth of field is the concept of using the appropriate settings (focusing point and aperture) in order to set the desired distance in which things will look acceptably sharp. So yeah, that last bit is obviously subjective, but then, everything is; even right on the focusing plane, different people consider different sharpness levels acceptable, hence the fact that people read lens reviews. It's been known since the days of film that depth-of-field scales that you can find on lenses are excessively optimistic, to the point that on full frame, you had to substitute f/8 for 2.8 that you would read on the scale, or f/16 instead of 11. The "focus on one third of the distance where you want things to look sharp" tip tends to work better than focusing on the middle of the distance because things in front of the focusing plane look a bit more blurry than those behind it. Having said all this, I too frequently find out that I used too shallow a depth of field, because it looked fine at the time, on my tiny screen.
This is one of those things that is common knowledge, or heresy, depending on the person you ask. I've had people tell me the focus is perfect across the whole depth of field. These are often the people who explain they don't use the camera's meter, but instead shoot in manual, adjusting the exposure until the little scale in the viewfinder says 0. Manual is the only way to be in control of the camera, they explain. "OH," I say, backing away slowly in case the disease is contagious...
@@DeputyNordburg lol yes, good example. Loads of tutorials from pro photographers asserting that either you use manual exposure as you described and are a serious photographer, or you are using automation and have less creative control. So apparently when I'm in aperture priority on my mirrorless camera and it gives me a more or less ok baseline exposure to which I can apply exposure compensation depending on what I want - especially since there's a pretty much perfect exposure preview in front of my eye - I am doing it wrong and lack creative control. Plus, it doesn't take me five seconds to be ready when I need to shoot instantly. Back to useless semantics: there's a popular youtube video called "Lens Compression Doesn't Exist" where a pro photographer explains that tele lenses don't compress, because you can shoot a wide-angle lens, then crop the photo to the same frame as the tele and it will look the same, while completely ignoring that the point is how different lenses alter the linear perspective of any subject. Apparently your lens choice depends entirely on how much you like walking.
It's all about the acceptable size of your circles of confusion. Look it up on wikipedia. Basically you need to define what acceptable sharpness is, and that can vary a lot. Tony's pixel-peeping destroys the concept of DoF.
Exactly. "In focus" is when the circle of confusion is smaller than a pixel. The higher the resolution of your sensor, the shallower the possible depth of field.
The depth of field markings on lenses are made for 35 mm / full frame. If you use the lens on a crop sensor, the depth of field is LESS, and you need to use about one f-stop smaller aperture marking. Yes, it is less, as the magnification from the sensor to the image is greater. In general, standard depth of field calculations are based on a CoC of 1/1200 of the sensor width. That corresponds to a resolution of less than one megapixel.
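The 1/1200 convention this commenter cites is easy to check. A quick sketch, assuming full-frame sensor dimensions:

```python
sensor_w, sensor_h = 36.0, 24.0        # full-frame sensor, mm
coc = sensor_w / 1200                  # classic convention -> 0.03 mm
# How many CoC-sized "spots" fit on the sensor, i.e. the implied resolution:
resolvable = (sensor_w / coc) * (sensor_h / coc)
print(coc, round(resolvable))          # 0.03 mm, 960000 spots -- under 1 megapixel
```

So a calculator built on the 0.03 mm convention is effectively judging sharpness at sub-1 MP resolution, which is why it looks optimistic on modern sensors.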
@@gregsullivan7408 Yes, but even with the same size, what I said applies. The depth of field markings have an interesting history. A Finnish photographer named Vilho Setälä made them for his Leica lens in the 1920s. Then his camera went to the factory for repair and Leica copied the idea. When he complained he got a free lens as compensation.
Depth of field is a zone where things are acceptably in focus but not actually in SHARP focus. The zone changes depending on the f-stop used, the lens used, and how close or far the subject is from the camera, but there is only one point where you get super sharp focus no matter what. I never figured any different, so I was taught correctly.
Fuji bodies have a depth of field indicator on the focus distance scale. Users can choose the standard by which the camera determines what is considered within the depth of field. One of the choices is “pixel level”, which is a more conservative setting that works well to capture images that have acceptable focus within this range. I wish all bodies offered similar functionality. Canon has a set of relatively new T/S lenses in its EF lineup. (Much newer than the one you used.) It is rumored to have AF-capable T/S RF-mount lenses on its internal road map.
A cropped sensor helps also. I shot ducks taking off from a small pond with a 5D3 and the 400mm prime; some ducks were not in focus. About the same shot with a 70D, and all were in focus. You have explained that before, but I didn't hear you mention it here. Keep up the great work.
Technology has changed the typical viewing distance and image size for prints and screens, but it has not changed the circle of confusion. So that begs the question for the DOF calculator you used: does it allow setting a value for the circle of confusion, and if so, what is its setting? But point taken, there is only one plane of focus.
What about focus peaking in the camera, showing it that way, and focus stacking? I feel a lot of photographers are stacking now, especially in product, landscape, and architecture work.
I noticed long ago that DOF is really a 2D plane, not just in photography but also in video, where distance can matter less, though I was still paying attention to the gap distance. Which means I'm even more careful about distance in photography.
When I first started reading photography books on the subject (or maybe just book, only 1 :) ) depth of field was used to describe levels of "acceptable" sharpness for subjects that were not directly focussed on. So whatever isn't "in focus", that being elements not focussed on by the camera, can still be "acceptably" sharp. At least, this is how I've learnt (and still learning) it. Nice.
Great, clear, easy-to-understand explanation of the concept of depth of field. I think a tilt/shift lens could be quite useful for photographing small animals at very close distances, such as snakes, lizards, and frogs.
We are running into a new problem with high megapixel count cameras. Maybe an m4/3 is a good compromise? ;) And why are there no tilt/shift cameras instead of tilt/shift lenses? Nowadays it's very easy to tilt and shift a sensor. I had a small tilt option in the Pentax K-7, but they didn't make a game changer out of it, and didn't give it more use than with the O-GPS in later models.
The compromise is not really real. You get the benefit by sacrificing maximum sharpness, but nobody forces you to make huge prints or to pixel-peep a high-megapixel image.
To your point about using a tilt-shift: you can tilt the focal plane to gain depth of field, but you radically lose it in other parts of the image. So yeah, awesome for product photography, but it's probably not going to help much for landscape shooters. Also, Canon (and I believe Nikon) still make 'new' tilt-shift lenses. I just bought the relatively newly released 50mm TS-E last year. Looking forward to when they start making tilt-shifts for mirrorless. Would be nice to see Fuji, Sony, etc. make competitive perspective control lenses...
Love it! I have always focused on what is most important and the largest area within the frame on landscapes. Where it gets really exciting is that focal length multiplies everything, for better or worse, so I often end up picking a compromise or focus stacking on those telephoto shots. Then you have larger sensors, which seemingly amplify your work, whether well executed or not so well executed.
Tony, as always, great video. Could you tell me why, when I put my full frame lens on my crop sensor camera, aside from crop factor, things just don't look right. Thanks
With the T/S lenses you could have gone down a whole rabbit hole of focal area shapes - something Ted Forbes sometimes examines in his lens reviews. Many older lenses didn't have exactly a focal plane so much as a sorta focal spherical section, and that sometimes made things easier, other times harder. The modern emphasis on absolute corner to corner sharpness, thanks to the incredible resolving capabilities of modern sensors, has made modern lenses far more planar than formerly, which does (at least in my own experience) really make that "focal plane" more of a true shallow plane in practice. In a way, with older glass you could take a different approach and sorta get away with more.
Good explanation. It's not a lie per se, just an oversimplification. I do a lot of astrophotography, and it's a similar concept to the 500 rule. What worked for old low-megapixel sensors doesn't work now with 40 or 50 megapixel sensors, and people are getting pickier about how much trailing is acceptable. If you're making an instagram post, you could probably use a 700 rule, but if you're printing for an art gallery, maybe you need a 300 rule. It all depends what your tolerance is and accepting the limitations.
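The 500 rule this commenter mentions is a one-liner; the constants are the usual rough guidelines, not exact physics:

```python
def max_shutter(focal_mm, crop=1.0, rule=500):
    """Longest exposure (seconds) before stars visibly trail, per the classic
    '500 rule'. Pickier output (big prints, dense sensors) calls for a
    smaller rule constant, as the comment above suggests."""
    return rule / (focal_mm * crop)

print(round(max_shutter(24), 1))             # ~20.8 s for a 24 mm on full frame
print(round(max_shutter(24, rule=300), 1))   # 12.5 s for gallery-grade tolerance
```

Like depth of field, the "rule" is just an acceptability threshold dressed up as a constant: change the tolerance and the answer changes.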
Tony, I think you may have goofed in interpreting Nikon's definition of DOF. In the actual experiment you did, you started off with the little doll in the CENTER of the plane of focus, and then moved the larger doll 3 inches back. I believe that means you pushed the larger doll twice the distance from the center of the field to the back edge of the field. Yes, it is true that this is a gradient, and not a blur-free field, but you might have tipped the scales quite a bit by misinterpreting the 3-inch distance as what you needed to move the doll back from the center of the field, versus it being the distance between the front and the back of the field.
I also thought he might have goofed on the three inches deep instead of however much back from the plane of focus. I wonder if his calculator took into account the resolution of his camera sensor too because the difference in sharpness between the two objects would be much more noticeable on a sensor with higher pixel density.
There's a little camera company that offers built-in image stacking. I own the smallest camera with it, the Olympus Tough TG6, which uses in-camera stacking for macro shooting. Works like magic: focus, zoom in if you want, hit the button; it takes 8 pictures, and after 2 seconds you get the finished JPEG, crisp and sharp through the whole artificial depth of field, right out of the camera. With nice bokeh in the background.
Yes!!! “In focus” nothing that isn’t on the focal plane (which is infinitely thin) is in focus. DOF refers to the area that will “appear” to be in focus at a given reproduction ratio at a given viewing distance. This is particularly important with digital photography where you don’t have to have a dark room to crop a photograph. Something that the photographer’s vision required to be sharp is suddenly out of focus because the photograph was cropped. I have a Linhof camera that has multiple DOF tables for different enlargement sizes.
I don't get why, using the same lens focused to the same distance and at the same aperture, I get more DOF on a full frame camera than with an APS-C camera, according to DOF calculators?
There are indeed curved sensors; at least Leica uses one. The big advantage is not in the DOF nor in focus control, but in the light projection angle (the photosites receive more light when it arrives perpendicular to the sensor surface).
You do not know how to apply DOF. go learn what Circle of Confusion is and that it takes into account viewing distance and image size. Simply put, you don't know what you're talking about... frequently. P.s. and the phrase has always been "acceptably in focus".
Do smaller sensors have a depth of field advantage? I often end up shooting with something like 200mm f20 (aps-c). I was wondering if the same shot with full frame would be better at all (300mm f30) or would the diffraction make the image softer.
Tony should also discuss the effect of lens focal length on the depth of field. The effect of reducing the focal length of the lens is greater than that of increasing the f-stop.
DOF is a gradient within an area that is considered acceptably sharp. Of course whether it's acceptable or not depends on how far you magnify the shot.
Ugh, this is killing me. DoF always depends on how the output is viewed. I believe all the common calculators assume a normal-sized photo held at an ordinary viewing distance (so not close up with a magnifying glass, or a massive 3-foot poster viewed up close).
Approximately an 8 x 10 inch print at approximately an 18-inch viewing distance is what the defaults normally are in a depth of field calculator. If you need a different print size or viewing distance, you are expected to change your circle of confusion.
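That default can be reconstructed. The sketch below uses the common convention that the eye resolves detail of roughly 1/1500 of the viewing distance; both that acuity constant and the print geometry are conventions I'm assuming, not fixed physics:

```python
def sensor_coc(view_dist_mm, enlargement, acuity=1500):
    """Circle of confusion on the sensor implied by a print viewed at
    view_dist_mm, enlarged `enlargement` times from the sensor.
    acuity=1500 means the eye resolves ~view_dist/1500 on the print."""
    return (view_dist_mm / acuity) / enlargement

# 8x10" print from a full-frame (36 mm wide) sensor, viewed at 18" (457 mm):
enlargement = 254 / 36                  # 10" long edge over 36 mm sensor width
print(round(sensor_coc(457, enlargement), 3))  # ~0.043 mm, same order as 0.03
```

Halve the viewing distance or double the print size and the implied CoC halves too, which is exactly why "acceptably sharp" moves with the output.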
Next Up: A discussion of hyperfocal distance. Really, it would be excellent to discuss "apparent" focus and "acceptable" sharpness, and the formulas for determining these at different display sizes. And hyperfocal, too, if you want.
I never knew how strong the effect of tilt-shift lenses is on the DOF... neat, and thanks for showing the viewfinder! Also one thing: lower-megapixel cameras like the R6 vs the R5 will simply have less ability to show detail, and especially in the DOF area this can hide the "out-of-focus-ness" a bit (on the R6), while on the R5, yeah... you see it. The same goes for subject shake, for example. It's not a "solve all", just something I noticed when I switched from my 6Dmkii to my new R6 (26->20 MP). My pictures feel... how would I put it... less sharp but more overall in focus, if that makes sense? My eyes are really accustomed to how a 26 MP picture looks at 1:1 on my PC :D
I’ve always thought a “plane” was the wrong way to think of how lenses focus. Due to the optics of your lens, shouldn’t it take on a more spherical shape? Also, your lens focuses on a gradient (in either direction). Your lens focuses on whatever you select and there’s a thin section that stays purely in focus. The F-stop simply controls how fast the focus falls off and becomes blurry. Bigger or smaller gradient.
Hi from France, Tony! (so excuse my bad English...) I believe in depth-of-field calculations, and I think the mistake is that your app uses a pixel size larger than your camera's. I did the calculation: H = 7' 10", with N = 8 and f = 24 mm, implies 28-micrometer pixels, while your camera's are under ten micrometers if it's a 40 MP sensor.
No, all the apps use 0.03 mm for full frame. That is the problem as it represents only around 1 megapixel resolution. The problem is that modern cameras and lenses are so sharp and we have better ways to view images that the depth of field really does not hold. That is why you should focus on the most important part of the image, not for example on some hyperfocal distance.
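The numbers in this exchange can be checked directly with the hyperfocal formula. The 0.0046 mm pitch below is an illustrative figure for a roughly 40 MP full-frame sensor, not a specific camera:

```python
def hyperfocal(f, n, coc):
    """Hyperfocal distance in mm for focal length f (mm), f-number n,
    and circle of confusion coc (mm)."""
    return f * f / (n * coc) + f

print(round(hyperfocal(24, 8, 0.030) / 1000, 2))   # ~2.42 m with the classic 0.03 mm CoC
print(round(hyperfocal(24, 8, 0.0046) / 1000, 2))  # ~15.68 m judged at the pixel level
```

A 24 mm at f/8 "focused at the hyperfocal distance" by the app is nowhere near hyperfocal by a pixel-level standard, which is both commenters' point.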
Nothing wrong with the concept of depth of field; you just need to make sure you are choosing the correct circle of confusion in the calculations. There can be issues with field curvature as well that will foil the mathematical perfection.
Other factors that increase depth of field: 1) distance from camera. If your subject is far away, like a landscape, it will all be in focus, even at large apertures. I find the DOF calculators seem to get the relationship between large apertures and distance from the camera wrong. 2) Wide-angle / short lenses seem to increase depth of field. 3) Smaller sensors. I've had consumer "super zoom" cameras that could focus on an object so close it was touching the lens and had good DOF at macro distance, no stitching required; it was because the sensor was minuscule. Smaller than APS-C and 4/3rds, but with the corresponding tradeoffs you would expect from a tiny sensor: inferior tonal and color range, and just forget bokehlicious backgrounds.
You could add that there are resolution limits and magnification limits for those depth-of-field calculations. For example, at what resolution would they have been accurate?
If you've ever seen lasers work and converge, and the focal points of lasers, you understand they all come to a *single* point. This makes total sense now.
Now I understand. Depth of field is acceptable focus. There will always be a degradation, or fall-off, in sharpness away from the focus plane. Thanks for the lesson, from the video and the comments.
Great video, Tony. Never knew anything about tilt-shift lenses before. Are they any good for macro of insects etc. when the subject is on a diagonal? Are there any with longer focal lengths and high magnification? Cheers, Noel
The Canon 90mm tilt-shift is a 0.5x macro. When you tilt a lens, the plane of focus tilts as well. So if everything you want in focus lies in one plane, but that plane is not parallel to the sensor, a tilt lens is perfect. I would recommend renting one and testing it. You can also get a tilt-shift converter.
The chameleon has overall perfect sharp vision, because it acts like a tilt-shift lens. Nikon would be #1 again, if it offered cheaper kit lenses with tilt-shift options.
I find the last part interesting (old school). DoF is more about having the main subject in focus and the background just a little less blurry, so you can distinguish whether what's behind is a mountain or a blooming branch (landscape). This takes practice to see exactly how it works, and then you can do creative photos.
Or your calculator might be wrong. Aren't there DOF guide marks on lenses any more? Or you could decide, by your own criteria, to just use half the DOF range that the lens or calculator shows?
Those guide marks are based on the same algorithms as the calculator uses. There is nothing wrong with the calculator. The problem is with the concept.
This video seems overly clickbaity. I thought it was common knowledge that Depth of Field was just a region of _reasonably_ sharp focus, at least among anyone who has actually heard of DoF.
Most have heard an incorrect definition of DoF and I have seen incorrect definitions in books going back to the 80's. So knowing how it should be properly defined is important as it may influence whether you do something like focus stacking, etc.
In my interactions with photographers more amateur than myself, this has rarely been the case, and many of them don't really seem to know exactly what "in focus" actually is (e.g. they see any photo that looks better in general than one taken with a phone, and assume it's in focus, because they just aren't used to having to zoom in and check), so I think this is a very useful video for a ton of people.
Had a friend who made small jeweled boxes with granulation and cabochons, about the same time I had a Pentax 35mm camera. To get into juried shows, she needed pix of these boxes that were razor sharp everywhere, front to back, when the frame was completely filled, showing two sides and the top of the box. I used the Pentax 50mm compact macro lens, which was cheap. With that basic camera/lens and film, we got ASTOUNDING front-to-back sharpness in aperture priority. Wide open, only the corner of the box was in focus; as you stopped down, more of the background got sharp, profoundly so, even in the viewfinder. Period. I've NEVER been able to get anywhere close to that cheap Pentax setup with digital photography. So I agree with you that depth of field with digital photography seems to be a lie, compared to how it was with even my cheapo Pentax film camera. But why?
Anything close to the focal plane will be "acceptably" in focus with a higher f-stop number (f/8, f/11). As far as I know, focus is a gradient in front of and behind the focal plane no matter how big your f-stop number is, so a subject even a little behind or in front of the focal plane will never be in perfect focus!
I find it interesting that Tony ends this demonstration with a tilt/shift lens. View camera users recognize that to gain "maximum" depth of field you have to adjust the focal plane, not the subject: enter the Scheimpflug principle. Yeah, that's old school and analogue-ish at best, but with the flat plane of focus you have in today's digital cameras you are very limited in how much you can actually "increase" your depth of field.
Canon is making autofocus RF tilt shift lenses. I think they will be game changers, especially if they work well enough to be functional portrait lenses with eye af
Canon are still making tilt-shift lenses. They refreshed the line in 2017, and the TS-E 135mm f/4L Macro has been _THE_ product lens ever since. Recent patents and somewhat-substantiated rumours heavily suggest they're gearing up to release the first-ever autofocus tilt-shift lenses. Word is Fuji will be getting some out for their GFX system soon as well. Shift adapters seem to have become quite big business for several third parties, too. Tilt-shift never went away and isn't "old-school analogue"; it's just rarely touched by YouTubers, probably for the same reasons lots of channels ignore all-manual brands like Zeiss and Voigtlander. Doesn't mean they're not there.
A more useful term than depth of field might be field of, or depth of, acceptable focus. The reason for this phenomenon is a concept taught in my photography course as "circles of confusion": lensed light creates a plane of perfect focus, where rays of light converge to a point. In front of and behind this plane is a "field" of infinitely many, infinitesimally thin planes where those rays are lensed not to a point but to a circle (the circle of confusion), growing larger on each plane farther from the focal plane. In images with deeper fields of acceptable focus, the circles of confusion created by the refracted light rays are small enough to be perceived as points over a greater distance.
This is also why a smaller aperture creates a deeper field of acceptable focus: by gating most of the more steeply angled rays of light, it produces smaller circles of confusion throughout the volume of the image field, expanding the field of acceptable focus.
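The growth of those circles with distance from the focused plane can be sketched numerically with the thin-lens equation. This is a simplified geometric model (the function name and numbers are mine; all lengths in mm):

```python
# Geometric blur-circle ("circle of confusion") diameter on the sensor
# for an object away from the focused plane, via the thin-lens equation.
def blur_diameter(focal, f_number, focus_dist, obj_dist):
    aperture = focal / f_number                  # aperture diameter
    v_plane = 1 / (1 / focal - 1 / focus_dist)   # image distance of focused plane
    v_obj = 1 / (1 / focal - 1 / obj_dist)       # where the object actually focuses
    return aperture * abs(v_obj - v_plane) / v_obj

# 50 mm lens focused at 2 m; object 0.5 m behind the focused plane
print(blur_diameter(50, 2.0, 2000, 2500))   # wide open: large blur circle
print(blur_diameter(50, 8.0, 2000, 2500))   # stopped down to f/8: 4x smaller
```

Shrinking the aperture shrinks every blur circle in proportion, which is exactly the "smaller circles of confusion throughout the volume" effect described above.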
Yes! I expected Tony to discuss circles of confusion in the video. Good explanation here.
Hey Tony, I think you are missing something here:
The circles of diffusion/confusion (i.e. the bokeh balls when almost in focus) can actually be calculated, so depth of field is not as subjective as you say. While studying physics, we actually calculated what arc (i.e. angle) an aperture (/lens system) could physically resolve and what the limits of being "in focus" were, because even light itself has its limits in that way. And then you still have to add the sensor dimensions...
That depth of field calculator you showed does not seem to factor in sensor size, but you can actually calculate the exact physical properties when you know the size of the pixels in the sensor. You can then calculate the actual depth of field where the difference in focus cannot be resolved by the camera anymore (because all the circles of diffusion are smaller than a pixel on the sensor) and so everything appears equally sharp.
I would guess that this calculator you showed just uses an old formula or estimation based on ~12MP sensors, which obviously fails for high MP sensors. But that doesn't make the concept untrue, just the estimation fails.
I was rather astounded back in my uni days, when we could exactly calculate the depth of field for a lens and sensor and then measure it with one of those printed, triangular focus-checkers.
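A minimal sketch of that pixel-level idea (mine, not the calculator's actual code): use the standard hyperfocal-distance formulas, but set the acceptable circle of confusion to the sensor's pixel pitch instead of the legacy 0.03 mm.

```python
import math

# Near/far limits of acceptable focus (all distances in mm),
# from the standard hyperfocal-distance formulas.
def dof_limits(focal, f_number, subject_dist, coc):
    H = focal ** 2 / (f_number * coc) + focal
    near = subject_dist * (H - focal) / (H + subject_dist - 2 * focal)
    far = (subject_dist * (H - focal) / (H - subject_dist)
           if subject_dist < H else math.inf)
    return near, far

legacy_coc = 0.03      # what most calculators assume for full frame
pixel_coc = 36 / 7728  # ~4.7 um pitch on a hypothetical 40 MP full-frame sensor

print(dof_limits(24, 8, 2400, legacy_coc))  # generous zone
print(dof_limits(24, 8, 2400, pixel_coc))   # far narrower, pixel-level zone
```

For the same 24 mm f/8 shot focused at 2.4 m, the legacy CoC yields a zone from roughly 1.2 m out to hundreds of metres, while the pixel-pitch CoC shrinks it to well under a metre deep, matching the point that old estimations fail on high-MP sensors.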
Depth of field is a gradation; it only measures falloff from the focal plane. Whether that falloff is shallow or deep is what is being described. You're not widening the effective focal range before the falloff is applied (it is always applied); you are affecting the falloff of the blur itself.
This is a good and helpful answer. Thank you
Spent the whole video waiting to hear the words “circles of confusion.” I was left hanging. Maybe do a follow-up to explain what depth of field actually means?
Depth of field only really makes sense if one also talks about the circle of confusion. After defining how much blur is acceptable, it's possible to calculate DOF. Quite often the circle of confusion is just set too large, and one gets disappointed, especially with a sensor with a small pixel size.
Not a lie, just often over simplified.
For sure. I made a comment up there. Many things are off in this video, but mainly: "in focus" is arbitrary, and sharpness is always a diminishing function away from the focus point. Also, autofocus works by maximizing sharpness at the focus point, not by maximizing the mean sharpness across the DOF.
The circle of confusion is the way I learned it many decades ago from my dad's photo magazines: starting at almost a point of perfect sharpness at the focused distance and growing as a cone in either direction. It was well understood that this was not perfect focus but acceptable sharpness for the film and the final viewing. I.e., if you are shooting 35mm and printing 8x10, how large can a point be rendered and still be considered effectively in focus when viewed at X distance? Blow it up to an 8-foot by 10-foot print and it looks out of focus at the same viewing distance, but back up to 10 times the distance (which would be typical) and it looks equally in focus.
Seems to me that Tony did not understand the depth of field concept. He won't sell many books that way.
We really work VERY hard at refusing to call anything or anyone a lie/liar.
A 2-dimensional focal plane has no depth. It's a lie. The amount of blur acceptable in the focal plane is zero.
It's amazing how many "experts" comment on your videos.
Finally, Tony acknowledged that he has been barely in focus on some of his f/1.2 videos. Can't blame him though. Everyone likes a bit of bokeh.
You mean Toney, right?
@@donaldobrien9171 😅😅
bokeh is the new vaseline on the lens :P
Bokeh is a quality, not an amount; you can't have "a bit of" it. That's like describing saturation as "a bit of hue".
@@sebastianmatthews1663 what if they were trying to have "a bit" of a laugh, is that okay, captain?
Wait! When did depth of field start meaning that there is an area in front of and behind the focal plane where objects are going to be tack sharp? I think Tony has been confused all these years. It has always been described as an area in front of and behind the focal plane that is ACCEPTABLY sharp, which is not only accurate, it's logical too.
Yes, that's why you focus on the eye: it's tack sharp, and the tip of the nose out in front is reasonably so, unless you're really close and shooting at f/1.2, where the tip of the eyelash might be joining the bokeh club.
THIS
I have to assume Tony is intentionally misunderstanding to demonstrate a point. Even semi-technical descriptions of depth of field always mention ACCEPTABLE or PERCEIVED sharpness. Rethinking signing up for the Art and Science of Photography series. Looks like the science may be unclear.
My texts, including Ansel Adams used the term "apparent" focus. There is only ONE focus plane. All the rest has circles of confusion. The effect is in the eye of the beholder.
He's not intentionally misunderstanding, he's intentionally representing a common misunderstanding amongst photographers (or at least inexperienced photographers).
As I understand it, DOF was once defined from an 8x10 print, held at a comfortable reading distance - what seems to be in focus in this situation. I wish you had mentioned this.
Correct. These days maybe it should be a computer monitor of some particular size at 24 inches or something similar. :)
There are not many lenses that have a flat focal plane. Curvature of the focal plane has flattened out more with recent lens designs, but it's always there to some degree.
Would be a good idea to use a duplication or enlargement lens for this purpose, since they're designed to have a flat field. A lot of macro lenses qualify too.
I also noticed that what Tony calls the focal plane may not be, as he said, parallel to the sensor. In wide-angle lenses like the Canon EF 15mm Fisheye, the focal plane seems to curve to the same degree as the lens's distortion. The example with the tilt-shift was great, but I missed a Fisheye test on the table.
Great stuff. Thanks for putting this out. I do have a question that I was hoping you could help answer. I do a lot of portrait photography, and there are always shots where one part of the model is out of focus or blurry and I can't figure out if it has to do with depth of field or my focus settings. For example, I have a shot where the model has her arms outstretched in front of her with her palm to the camera, signaling a stop motion✋️. When I focus on her face, her palm becomes blurry but her face is sharp. When I focus on her palm, her face becomes blurry but her palm is sharp. If I want to get the model's face and palm both in focus in this case, will a higher f-stop do the job, or is this a focus issue? Any insight you could share on this would be greatly appreciated.
Depth of field has always been related to certain printing size seen at a certain distance.
Viewing digital picture at 100% magnification will of course enhance lack of focus.
So, btw, any DOF calculator that doesn't take pixel size into account can't give an accurate DOF measurement.
I remember when I was learning DOF, I kept seeing the phrase “acceptably sharp”, and I kept mentally tripping over that phrase….acceptable to whom? I kept thinking. I wish someone had mentioned this then! It would not have been so mysterious to me. 😂
Displays are becoming so good at showing detail, and the lack thereof. I noticed a difference in the side-by-side comparison before he even had to zoom in. Viewing on a 2021 iPad Pro.
Good video, I learnt a lot, which recently is quite rare for camera channels.
I have never expected any part of an image outside the focal plane to be as sharp as anything on the focal plane. There's always going to be a fall off. I don't need the sharpness to be evenly fantastic throughout the entire depth of field either. One of the keys I always thought was viewing distance. Pixel peeping is the opposite of giving an image that proper distance. Prints are still prints and may look great in real life even without perfect sharpness at 200% as seen on a computer.
I guess I misunderstood depth of field all these years as I never thought it meant perfect sharpness throughout the area. More like acceptable.
You didn't misunderstand it, this video creates confusion where there isn't any. Depth of field had always been taught (correctly) that depth of field is the apparent area of acceptable sharpness and that there will only ever be one plane in perfect focus.
@@jeffreywrightphotography Not only has it always been defined as the region of the image which is acceptably in focus, but there is even a definition as to what acceptably sharp means. I typed it in a separate message.
Your understanding is correct; Tony's is wrong. He does not understand the concept of depth of field.
As you mentioned, the tilt feature is useful for product photography, but also useful for any close macro photography. Even if you're photographing an object flat-on, if the object's reflective, the reflection of the camera can be annoying. The shift feature solves that. The shift feature is also super-useful for real-estate photos where it's hard to get a square angle because of tight spaces, or you want the windows and walls to look square when shooting a vertical space. A room with a high ceiling can look very dramatic. You can also prevent the camera from appearing in reflections on mirrors or windows.
A friend of mine loves his tilt-shift lens for travel; he uses it more than any other lens, to the point of wearing the paint off. He mostly uses the shift feature, and creates some really unique "impossible looking" architectural photos.
What I'm saying, is tilt-shift lenses are underrated.
(Although if you only want to use the shift feature, it's effectively just an in-camera crop. You could also just use a super-wide lens and crop in post.)
Crop in post isn't the same. You lose many, many pixels and stretch the ones that are left. Usually the top of the building, too. Been there, done that. Plus, it's the ability to compose in the viewfinder. Love my new Samyang 24mm tilt/shift on my Sony a7Rii.
Exactly. Depth of field is a range of "acceptable" focus, which is subjective, and also depends on the degree of magnification for viewing.
Bingo, this is what I was taught.
I'm sorry, Tony, but it should be obvious that you have a "block" of focus with a center peak, not an in-and-out border.
The tilt-shift lens demo was amazing.
That's freaking awesome. Keep up the good work.
For a given distance setting, there is one plane of sharpness, and a zone of acceptable blur. The latter is called depth of field.
Depth of field depends on three factors: magnification, relative aperture and enlargement.
Magnification is calculated using the actual focal length and focusing distance. Lenses with "internal focusing" don't actually change focusing distance, but instead focal length. This has considerable effect on magnification.
The relative aperture is therefore also likely to change, and camera manufacturers may or may not take that into account when they program their built-in depth of field calculators.
The third factor, enlargement, is rarely taken into account. Those age old formulas are calculated for a viewing distance of 3-4 times the diagonal of the final image. A 10x15 cm or 4x6" copy would normally be viewed at a distance of around 20 cm or 8". Today, images are generally viewed on computer monitors with a diagonal of 14-27" at a distance of 10-30" (25-75 cm). Try moving back from your monitor to 3-4 times its diagonal. Things will look less blurry (depending on eyesight, of course).
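That viewing-geometry argument can be made concrete. A rough sketch under common assumptions (an eye resolving about one arcminute; the function name is mine):

```python
import math

# Acceptable CoC on the SENSOR, derived from viewing conditions:
# the eye resolves roughly 1 arcminute; divide by the print enlargement.
def sensor_coc(view_dist_mm, print_diag_mm, sensor_diag_mm, eye_arcmin=1.0):
    print_coc = view_dist_mm * math.tan(math.radians(eye_arcmin / 60))
    enlargement = print_diag_mm / sensor_diag_mm
    return print_coc / enlargement

# Classic case: an 8x10" print viewed at ~3x its diagonal, full-frame sensor
diag = math.hypot(8 * 25.4, 10 * 25.4)
c = sensor_coc(3 * diag, diag, 43.3)
print(round(c, 3))   # lands near the classic ~0.03 mm full-frame CoC
```

Move the same print twice as far away and the acceptable CoC doubles, which is why those age-old formulas baked in a 3-4x-diagonal viewing distance.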
You really enlighten and add dimension to old concepts. Appreciate your videos.
Best 'focus' subject video as it covers a range of ways of dealing with focal plane issues. Thanks for this valuable presentation.
DOF and "acceptably in focus" make assumptions about viewing distance and the size of the image being viewed. That is why things on a small monitor (e.g. the camera LCD), tend to look in focus even though they do not when viewed on a large, high res monitor. DOF calculators have simply chosen not to include viewing distance and image size in their calculations. They should offer that option.
All I've done for the last week is watch your videos. Thanks Tony!
Such an amazing explanation mate.
Very much appreciated.
Also, how did you get to the calculations?
I chuckled at the way Tony's "dad" seemed to nod or shake his head at each point Tony made. As for DoF, focus stacking is really the only solution for most of us. 👍
A filter thread mounted "TWO FIELD LENS" can get both the near and distant objects in focus. I got mine for $1 (used). Another solution is to get a crop sensor camera to get deeper depth of field.
If you really want to understand this, look at the concept of circles of confusion. Then you can see that the resolution of your sensor also affects depth of field.
The depth of field was never defined as the distances fully in focus, just "acceptably sharp."
Glad to see you say this. This became obvious when even 36 mega-pixel sensors came out. Zeiss has excellent white papers on their website that explain depth of field fully and correctly. The best thing I learned is the concept of the "circle of confusion", which is the size of an area on your film or digital sensor that is the smallest it can resolve.
Yes, it is subjective to a point. And as other commenters have already noted, the subjectivity comes down to one parameter in the DoF formula, the "circle of confusion". In the old days, we used an approximation for the CoC tied to film format. I don't precisely remember, but it may have been based on 100 line pairs/millimeter resolution for 35mm film and an 8" x 10" print at 6x-diagonal (6 * 12.8" = 76.8") viewing distance. We used a different CoC number for medium format. At 8" x 10", an 8x magnifying loupe on the focusing glass was easier than the formula; we would be focusing, tilting, shifting, swinging, and focusing again a lot for one shot anyhow. And when in doubt, a Polaroid test shot with a 4" x 5" adapter in the 8" x 10" camera.
In digital, we have ever more resolving sensors, albeit the 100lp/mm in 1975 for color film and very good lenses already was very good. And note, film had grain but did not need wild-assed color guessing for missing colors that today we need with Bayer filtered sensors (that are analog, panchromatic and colorblind to begin with).
So the comparison with film is hard to make, for one because in digital we have raw processing (color guessing) where anti-aliasing may have been facilitated in hardware with a second filter-layer over the sensor, and may again be done in post by raw processing too. As we pixel peep, we ought to adjust the CoC in the DoF parameter according to today's use of images. As most raw shots end up in the interwebs at, say, 1,280 resolution in JPEG we might want to take that into account.
The subjective part of the CoC aside, it clearly relates to the resolution of our lenses and sensors, and the quality of our raw processing. So there are a lot of variables involved here.
Then there is the problem that focal length of a lens is defined at infinity. If we focus closer by we shift the (entire) lens away from the sensor and the mathematical or geometrical or laws-of-physics focal length gets longer. Consequently the image angle gets smaller. Consequently the F-number on the diaphragm control ring is not correct anymore.
(Turn this around and you can see that a "prime" lens that has no focus breathing, is actually a zoomlens that compensates the focusing effect on focal length in terms of image angle. Potentially in that process, the T-stop for the F-stop we had set, remains the same.)
Altogether, the problem with DoF is the DoF calculator app that does not take all these parameters and variables into account properly.
The app may ask you for your camera make and model as well as lens make and model, while ignoring viewing size, firmware version, and raw processing. Having realized all that, I entered the data to be given the "DoF" by the app, then changed the camera for another model with significantly higher or lower resolution. The app I tested gave me the same distances. Uninstalled.
Idk if this will be addressed, but the way I was taught about depth of field was that the in focus range was the range of general acceptable sharpness/ focus.
Which means "they're more like guidelines anyway." Different people have different standards of acceptability, but for the average person it's a guideline for the range in which there's a decent chance you'll find the focus to be "close enough."
The important thing is that we keep focused on what's important.
Hahaha that was so punny😂😜
I feel like you are trying to decrease my circle of confusion, but my sensor tells me that’s not possible.
I really believed he would cover the more technical terms of "acceptable sharpness" and circle of confusion in this video
Your last 2 vids ( this and tethering) are exactly what I've been dealing with as I've been stepping up my studio game, thanks!
In a way the argument is mostly about semantics - hence the clickbait-y title. The least confusing way of putting it would be something like this: technically, in any photo, there's only one spot that is in perfect focus, but depth of field is the concept of using the appropriate settings (focusing point and aperture) in order to set the desired distance in which things will look acceptably sharp. So yeah, that last bit is obviously subjective, but then, everything is; even right on the focusing plane, different people consider different sharpness levels acceptable, hence the fact that people read lens reviews. It's been known since the days of film that depth-of-field scales that you can find on lenses are excessively optimistic, to the point that on full frame, you had to substitute f/8 for 2.8 that you would read on the scale, or f/16 instead of 11. The "focus on one third of the distance where you want things to look sharp" tip tends to work better than focusing on the middle of the distance because things in front of the focusing plane look a bit more blurry than those behind it.
Having said all this, I too frequently find out that I used too shallow a depth of field, because it looked fine at the time, on my tiny screen.
This is one of those things that is common knowledge, or heresy depending on the person you ask. I've had people tell me the focus is perfect across the whole depth of field. These are often the people who explain they don't use the camera's meter, but instead shoot in manual, adjusting the exposure until the little scale in the viewfinder says 0. Manual is the only way to be in control of the camera they explain. OH I say backing away slowly in case the disease is contagious...
@@DeputyNordburg lol yes, good example. Loads of tutorials from pro photographers asserting that either you use manual exposure as you described and are a serious photographer, or you are using automation and have less creative control. So apparently, when I'm in aperture priority on my mirrorless camera and it gives me a more or less OK baseline exposure to which I can apply exposure compensation depending on what I want, especially since there's a pretty much perfect exposure preview in front of my eye, I am doing it wrong and lack creative control. Plus, it doesn't take five seconds to be ready to shoot any picture when I need to shoot instantly.
Back to useless semantics, there's a popular youtube video called "Lens Compression Doesn't Exist" where a pro photographer explains that tele lenses don't compress, because you can shoot a wide angle lens, then crop the photo to the same frame as the tele and it will look the same, while completely ignoring that the point is how different lenses alter linear perspective of any subject. Apparently your lens choice depends entirely on how much you like walking.
It's all about the acceptable size of your circles of confusion. Look it up on Wikipedia. Basically you need to define what counts as acceptable sharpness, and that can vary a lot. Tony's pixel-peeping destroys the concept of DoF.
Exactly. In-focus is when the circle of confusion is smaller than a pixel. The higher resolution of your sensor, the shallower the possible depth-of-field.
The depth of field markings on lenses are made for 35 mm / full frame. If you use the lens on a crop sensor the depth of field is LESS and you need to use about one f-stop smaller aperture marking. Yes, it is less as the magnification from the sensor to the image is greater.
In general the standard depth of field calculations are based on CoC 1/1200 of the sensor width. That is just resolution less than one megapixel.
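A quick sanity check of that 1/1200 figure (my arithmetic, assuming a 36 x 24 mm frame):

```python
sensor_w, sensor_h = 36.0, 24.0  # full-frame dimensions, mm
coc = sensor_w / 1200            # the conventional CoC: 0.03 mm
resolvable = (sensor_w / coc) * (sensor_h / coc)
print(coc, round(resolvable))    # 0.03 mm -> 960,000 "points": under 1 MP
```

So any calculator built on that convention is implicitly judging sharpness at sub-one-megapixel resolution.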
Not exactly - the DoF markings also make an assumption of a) the size of the PHOTO, and the viewing distance to the photo.
@@gregsullivan7408 Yes, but even with the same size what I said applies.
The depth of field markings have an interesting history. A Finnish photographer named Vilho Setälä made them for his Leica lens in the 1920s. Then his camera went to the factory for repair and Leica copied the idea. When he complained, he got a free lens as compensation.
Depth of field is the zone where things are acceptably in focus but not actually in SHARP focus. The zone changes depending on the f-stop used, the lens used, and how close or far the subject is from the camera, but there is only one point where you get super sharp focus no matter what. I never figured any different, so I was taught correctly.
Fuji bodies have a depth of field indicator on the focus distance scale. Users can choose the standard by which the camera determines what is considered within the depth of field. One of the choices is “pixel level”, which is a more conservative setting that works well to capture images that have acceptable focus within this range. I wish all bodies offered similar functionality.
Canon has a set of relatively new T/S lenses in its EF lineup. (Much newer than the one you used.) It is rumored to have AF-capable T/S RF-mount lenses on its internal road map.
Cropped sensor helps also. Shot ducks taking off from a small pond with a 5D3 and the 400mm prime. Some ducks not in focus. About the same shot with a 70D and all in focus. You have explained that before, but didn't hear you mention it here. Keep up the great work.
Technology has changed the typical viewing distances of prints/screens and image sizes, but it has not changed the circle of confusion. So that begs the question for the DOF calculator you used: does it allow setting a value for the circle of confusion, and if so, what is its setting? But point taken: there is only one plane of focus.
What about focus peaking in the camera and showing it that way, and focus stacking? I feel a lot of photographers are stacking now, especially in product, landscape, and architecture work.
I noticed long ago that the plane of focus is really 2D, not just in photography but also in video, where distance can matter less, though I was still paying attention to gap distance. Which means I make extra sure of distance in photography.
When I first started reading photography books on the subject (or maybe just book, only 1 :) ) depth of field was used to describe levels of "acceptable" sharpness for subjects that were not directly focussed on.
So whatever isn't "in focus", that being elements not focussed on by the camera, can still be "acceptably" sharp. At least, this is how I've learnt (and still learning) it. Nice.
Great, clear, easy-to-understand explanation of the concept of depth of field. I think a tilt/shift lens could be quite useful for photographing small animals at very close distances, such as snakes, lizards, and frogs.
Very informative, and it demonstrated the use of tilt-shift lenses well.
We are running into a new problem with high megapixel count cameras. Maybe a m4/3 is a good compromise ? ;)
And why are there no tilt/shift cameras instead of tilt/shift lenses? Nowadays it's very easy to tilt and shift a sensor. I had a small tilt option in the Pentax K-7, but they didn't make a game changer out of that option and didn't give it more use than with O-GPS in later models.
The compromise is not really real. You get the benefit by sacrificing maximum sharpness. Nobody forces you to make huge prints or to pixel-peep a high-megapixel image.
Great info in 4K60. I loved this
To your point about using a tilt-shift-you can tilt the focal plane to gain depth of field, but you radically loose it in other parts of the image. So yeah, awesome for product photography, but it's probably not going to help much for landscape shooters.
Also, Canon (and I believe Nikon) still make 'new' tilt-shift lenses. I just bought the relatively newly released 50mm TS-E last year. Looking forward to when they start making tilt-shifts for mirrorless. Would be nice to see Fuji, Sony, etc. make competitive perspective control lenses...
Lose, not "loose"
Love it! I have always focused on what is most important and the largest area within the frame on landscapes. Where it gets really exciting is that focal length multiplies everything, for better or worse, so you often pick a compromise or focus stack on those telephoto shots. Then you have larger sensors, which seemingly amplify your work, whether well executed or not so well executed.
every bit of this helps! thank u for making this video for newbies to learn!!
Tony, as always, great video. Could you tell me why, when I put my full-frame lens on my crop-sensor camera, aside from the crop factor, things just don't look right? Thanks
@Joe Trent thanks Joe, your input helps me. Take care and be safe out there.
With the T/S lenses you could have gone down a whole rabbit hole of focal area shapes - something Ted Forbes sometimes examines in his lens reviews. Many older lenses didn't have exactly a focal plane so much as a sorta focal spherical section, and that sometimes made things easier, other times harder. The modern emphasis on absolute corner to corner sharpness, thanks to the incredible resolving capabilities of modern sensors, has made modern lenses far more planar than formerly, which does (at least in my own experience) really make that "focal plane" more of a true shallow plane in practice. In a way, with older glass you could take a different approach and sorta get away with more.
Good explanation. It's not a lie per se, just an oversimplification. I do a lot of astrophotography and it's a similar concept to the 500 rule. What worked for old low-megapixel sensors doesn't work now with 40 or 50 megapixel sensors, and people are getting pickier about how much trailing is acceptable. If you're making an Instagram post, you could probably use a 700 rule, but if you're printing for an art gallery, maybe you need a 300 rule. It all depends on what your tolerance is and on accepting the limitations.
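The trade-off between a 500, 300, or 700 rule is easy to put into numbers. A minimal sketch (the function name and the idea of making the rule constant a parameter are mine, not from any particular calculator):

```python
def max_shutter_seconds(rule, focal_length_mm, crop_factor=1.0):
    """'Rule of N' estimate for untrailed stars: N / effective focal length."""
    return rule / (focal_length_mm * crop_factor)

# A 20 mm lens on full frame:
print(max_shutter_seconds(500, 20))  # 25.0 s with the traditional 500 rule
print(max_shutter_seconds(300, 20))  # 15.0 s with a stricter 300 rule
```

Same lens, same sky: the only thing that changes is how small a star trail you are willing to call "a point", which is exactly the tolerance question above.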
Nice video, greetings from Libya 🇱🇾
Tony, I think you may have goofed in interpreting Nikon's definition of DOF. In the actual experiment you did, you started off with the little doll in the CENTER of the plane of focus, and then moved the larger doll 3 inches back. I believe that means you pushed the larger doll twice the distance from the center of the field to the back edge of the field. Yes, it is true that this is a gradient and not a blur-free field, but you might have tipped the scales quite a bit by misinterpreting the 3-inch distance as what you needed to move the doll back from the center of the field, versus it being the distance between the front and the back of the field.
I also thought he might have goofed on the three inches deep instead of however much back from the plane of focus. I wonder if his calculator took into account the resolution of his camera sensor too because the difference in sharpness between the two objects would be much more noticeable on a sensor with higher pixel density.
Not surprisingly, Tony taught me something today. Keep 'em comin' please sir.
There's a little camera company that offers built-in image stacking. I own the smallest camera with it, the Olympus Tough TG6, which uses in-camera stacking for macro shooting. Works like magic: focus, zoom in if you want, hit the button; it takes 8 pictures, and after 2 seconds you get the finished JPEG, crisp and sharp through the whole artificial depth of field, right out of the camera, with nice bokeh in the background.
Yes!!!
“In focus”: nothing that isn't on the focal plane (which is infinitely thin) is truly in focus.
DOF refers to the area that will “appear” to be in focus at a given reproduction ratio at a given viewing distance. This is particularly important with digital photography where you don’t have to have a dark room to crop a photograph. Something that the photographer’s vision required to be sharp is suddenly out of focus because the photograph was cropped. I have a Linhof camera that has multiple DOF tables for different enlargement sizes.
I don't get why, using the same lens focused to the same distance and at the same aperture, I get more DOF on a full-frame camera than with an APS-C camera, according to DOF calculators.
What happens if a sensor is not flat but curved (flexible or static)?
There are indeed curved sensors, at least Leica uses one. The big advantage is not on the DOF nor in the focus control, but in the light projection angle (the photosites receive more light when it comes perpendicular to the sensor surface).
So are you saying all fixed-focus cameras, like GoPros, many DJI drones, and other action cameras, are ALWAYS out of focus?
This is making me want to buy a technical bellows system so bad. ha! Great demo, T!
You do not know how to apply DOF. Go learn what the circle of confusion is and that it takes into account viewing distance and image size. Simply put, you don't know what you're talking about... frequently. P.S. The phrase has always been "acceptably in focus".
Do smaller sensors have a depth of field advantage? I often end up shooting with something like 200mm f/20 (APS-C). I was wondering if the same shot with full frame would be better at all (300mm f/30), or would the diffraction make the image softer?
The diffraction hits with APS-C at f/20 where at FF it would hit at f/30. APS-C has no inherent benefit in this.
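That equivalence can be checked with the standard Airy disk formula for a diffraction-limited lens; a quick sketch (550 nm green light and a 1.5x crop factor assumed):

```python
def airy_diameter_mm(f_number, wavelength_nm=550):
    # Diffraction-limited Airy disk diameter: 2.44 * lambda * N
    return 2.44 * wavelength_nm * 1e-6 * f_number

# What matters for the final image is blur as a fraction of sensor width.
# With a 1.5x crop (36 mm full-frame width vs 24 mm APS-C width):
ff_blur = airy_diameter_mm(30) / 36     # full frame at f/30
apsc_blur = airy_diameter_mm(20) / 24   # APS-C at f/20
# The two fractions come out equal (30/36 == 20/24), so neither format
# has an inherent diffraction edge at equivalent apertures.
```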
Tony should also discuss the effect of lens focal length on the depth of field. The effect of reducing the focal length is greater than that of increasing the f-stop.
DOF is a gradient within an area that is considered acceptably sharp. Of course whether it's acceptable or not depends on how far you magnify the shot.
Ugh, this is killing me. DoF always depends on how the output is viewed. I believe all the common calculators assume a normal-sized photo held at an ordinary viewing distance (so not close up with a magnifying glass, and not a massive 3-foot poster up close).
An 8 × 10 inch print at approximately an 18-inch viewing distance is what the defaults normally are in a depth of field calculator. If you need a different print size or viewing distance, you are expected to change your circle of confusion.
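The scaling itself is simple. A hypothetical helper (the baseline value of 0.030 mm for full frame and the linear scaling are conventional, but this exact function is my illustration, not any calculator's actual code):

```python
# Conventional full-frame CoC, derived for roughly an 8x10" print
# viewed at about 18 inches:
BASELINE_COC_MM = 0.030

def scaled_coc(print_enlargement, viewing_distance_ratio):
    """Bigger prints need a smaller CoC; standing further back relaxes it.

    print_enlargement: your print size / the baseline 8x10 size
    viewing_distance_ratio: your viewing distance / the baseline ~18 inches
    """
    return BASELINE_COC_MM * viewing_distance_ratio / print_enlargement

# Twice the print size at the same viewing distance halves the allowed CoC:
print(scaled_coc(2.0, 1.0))  # 0.015
```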
Next Up: A discussion of hyperfocal distance.
Really, it would be excellent to discuss "apparent" focus and "acceptable" sharpness, and the formulas for determining these at different display sizes. And hyperfocal, too, if you want.
I never knew how strong the effect of tilt shift lenses is on the dof... neat and thanks for showing the viewfinder!
Also, one thing: lower-megapixel cameras like the R6 (vs. the R5) will simply have less ability to show detail, and especially in the DOF area this can hide the "out-of-focus-ness" a bit (on the R6), while on the R5... you see it. The same goes for subject shake, for example. It's not a solve-all, just something I noticed when I switched from my 6D Mk II to my new R6 (26 → 20 MP).
My pictures feel... how would I put it... less sharp but more overall in focus, if that makes sense?
My eyes are really accustomed to how a 26 MP picture looks at 1:1 on my PC :D
TONY IS THE MAN!!!
I’ve always thought a “plane” was the wrong way to think of how lenses focus. Due to the optics of your lens, shouldn’t it take on a more spherical shape?
Also, your lens focuses on a gradient (in either direction). Your lens focuses on whatever you select and there’s a thin section that stays purely in focus. The F-stop simply controls how fast the focus falls off and becomes blurry. Bigger or smaller gradient.
Hi from France, Tony! (So excuse my bad English...) I believe in depth-of-field calculations, and I think the mistake is that your app uses a pixel size larger than your camera's. I did the calculation: H = 7' 10", with N = 8 and f = 24 mm, implies 28-micrometer pixels, while your camera's are under ten micrometers if it's a 40 MP sensor.
No, all the apps use 0.03 mm for full frame. That is the problem, as it represents only around 1 megapixel of resolution. Modern cameras and lenses are so sharp, and we have better ways to view images, that the calculated depth of field really does not hold. That is why you should focus on the most important part of the image, not, for example, at some hyperfocal distance.
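How much the choice of circle of confusion matters falls straight out of the thin-lens hyperfocal formula; a small sketch comparing the conventional 0.030 mm CoC against a pixel-pitch-sized one (0.005 mm is my assumed pitch for a high-MP full-frame sensor):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Thin-lens hyperfocal distance: f^2 / (N * c) + f."""
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

# 24 mm lens at f/8, conventional 0.030 mm CoC:
h_classic = hyperfocal_mm(24, 8, 0.030)  # ~2424 mm, about 7 ft 11 in
# Same lens, CoC set to an assumed ~0.005 mm pixel pitch:
h_strict = hyperfocal_mm(24, 8, 0.005)   # ~14424 mm, about 47 ft
```

The first figure matches the roughly 7' 10" reading from the calculator in the video; demanding per-pixel sharpness pushes the hyperfocal distance out about sixfold.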
Nothing wrong with the concept of depth of field; you just need to make sure you are choosing the correct circle of confusion in the calculations. There can be issues with field curvature as well that will foil the mathematical perfection.
I know nothing about photography other than making cool images
Other factors that increase depth of field: 1) Distance from the camera. If your subject is far away, like a landscape, it will all be in focus, even at large apertures. I find the DOF calculators seem to get the relationship between large apertures and distance from the camera wrong. 2) Wide-angle / short lenses seem to increase depth of field. 3) Smaller sensors. I've had consumer "superzoom" cameras that could focus on an object so close it was touching the lens and had good DOF at macro distances, no stacking required; it was because the sensor was minuscule, smaller than APS-C and 4/3rds, but with the corresponding tradeoffs you would expect from a tiny sensor: inferior tonal and color range, and just forget bokehlicious backgrounds.
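The distance effect drops straight out of the usual near/far approximation built on the hyperfocal distance H. A sketch under that approximation (focal length neglected, distances in any consistent unit):

```python
def dof_limits(hyperfocal, subject_distance):
    """Approximate limits of acceptable focus:
    near = H*s / (H + s), far = H*s / (H - s), with far -> infinity
    once the subject is at or beyond the hyperfocal distance."""
    near = hyperfocal * subject_distance / (hyperfocal + subject_distance)
    if subject_distance >= hyperfocal:
        far = float("inf")  # everything out to infinity looks acceptably sharp
    else:
        far = hyperfocal * subject_distance / (hyperfocal - subject_distance)
    return near, far

# With a 2.4 m hyperfocal distance, focusing on a distant subject at 10 m
# renders everything from ~1.9 m to infinity acceptably sharp:
print(dof_limits(2.4, 10))
```

That is why distant landscapes look sharp front to back even at fairly large apertures: once the subject distance passes H, the far limit runs away to infinity.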
Exactly!
You could add that there are resolution limits and magnification limits for those depth-of-field calculations. For example, at what resolution would it have been accurate?
If you've ever seen lasers work and converge, and the focal points of lasers, you understand they all come to a *single* point. This makes total sense now.
I'm liking that camera angle! It's interesting! 😀👍
Now I understand. Depth of field is acceptable focus. There will always be a degradation or fall-off in sharpness away from the focal plane. Thanks for the lesson from the video and the comments.
First of your videos in a while that caught my interest. 👍
Laowa produces great tilt-shift lenses; you could try them.
Great video Tony
Never knew anything about tilt-shift lenses before. Are they any good for macro of insects etc. when the subject is on a diagonal?
Are there any with a longer focal length and high magnification?
Cheers
Noel
The Canon 90mm tilt-shift is a 0.5x macro. When you tilt a lens, the plane of focus tilts as well. So if everything you want in focus is in one plane, but that plane is not parallel to the sensor, a tilt lens is perfect. I would recommend renting one and testing it. You can also get a tilt-shift converter.
Awesome video. I still massively appreciate you disseminating the aps-c aperture crop factor principle. You guys are a great channel
Thank you very much, dear master. I think using hyperfocal distance apps and focusing at the hyperfocal distance increases DOF.
The chameleon has overall perfect sharp vision, because it acts like a tilt-shift lens. Nikon would be #1 again, if it offered cheaper kit lenses with tilt-shift options.
Can AI bokeh (used in smartphones) match natural bokeh? If so, then the budget/midrange camera market is in danger.
I find the last part interesting (old school). DoF is more about having the main subject in focus and the background somewhat less blurry, so you can distinguish whether what's in the background is a mountain or a blooming branch (landscape). This needs practice to see exactly how it works, and then you can do creative photos.
Or your calculator might be wrong. Aren't there DOF guide marks on lenses anymore? Or can you decide, by your own criteria, to just use half the DOF range that the lens or calculator shows?
Those guide marks are based on the same algorithms as the calculator uses. There is nothing wrong with the calculator. The problem is with the concept.
This video seems overly clickbaity. I thought it was common knowledge that Depth of Field was just a region of _reasonably_ sharp focus, at least among anyone who has actually heard of DoF.
Welcome to youtube!
Most have heard an incorrect definition of DoF, and I have seen incorrect definitions in books going back to the '80s. So knowing how it should properly be defined is important, as it may influence whether you do something like focus stacking, etc.
In my interactions with photographers more amateur than myself, this has rarely been the case, and many of them don't really seem to know exactly what "in focus" actually is (e.g. they see any photo that looks better in general than one taken with a phone, and assume it's in focus, because they just aren't used to having to zoom in and check), so I think this is a very useful video for a ton of people.
Really interesting video Tony, thanks!
Had a friend who made small jeweled boxes with granulation and cabochons, about the same time I had a Pentax 35mm camera. To get into juried shows, she needed pix of these boxes that were razor sharp everywhere, front to back, when the frame was completely filled, showing two sides and the top of the box. I used the Pentax 50mm compact macro lens, which was cheap. With that basic camera/lens and film, we got ASTOUNDING front-to-back sharpness in aperture priority. Wide open, only the corner of the box was in focus. As you stopped it down, more of the background got sharp, profoundly so, even in the viewfinder. Period. I've NEVER been able to get anywhere close to that cheap Pentax setup with digital photography. So I agree with you that depth of field with digital photography seems to be a lie, compared to how it was with even my cheapo Pentax film camera. But why?
Anything close to the focal plane will be "acceptably" in focus with a higher f-stop number (f/8, f/11). From what I know, focus is like a gradient in front of and behind the focal plane no matter how big your f-stop number is, so a subject even a little behind or in front of the focal plane will never be in perfect focus!
As always, very interesting and insightful.
I find it interesting that Tony ends this demonstration with a tilt/shift lens. View camera users recognize that to gain "maximum" depth of field you have to adjust the focal plane, not the subject: enter the Scheimpflug principle. Yeah, that's old school and analogue-ish at best, but with the flat plane of focus you have in today's digital cameras, you are very limited in how much you can actually "increase" your depth of field.
Tony, does depth of field depend on pixel pitch?
No, not the way it is typically defined; it is based on around one megapixel.
Canon is making autofocus RF tilt shift lenses. I think they will be game changers, especially if they work well enough to be functional portrait lenses with eye af
So depth-of-field really acts more like a focal gradient then
Canon are still making tilt-shift lenses. They refreshed the line in 2017, and the TS-E 135mm f/4L Macro has been _THE_ product lens ever since. Recent patents and somewhat-substantiated rumours heavily suggest they're gearing up to release the first-ever autofocus tilt-shift lenses. Word is Fuji will be getting some out for their GFX system soon as well. Shift adapters seem to have become quite big business for several third parties, too. Tilt-shift never went away and isn't "old-school analogue"; it's just rarely touched by YouTubers, probably for the same reasons lots of channels ignore all-manual brands like Zeiss and Voigtlander. Doesn't mean they're not there.
Samyang are also still making them for various mounts at a reasonable cost. Bought my 24mm tilt/shift for my Sony a7Rii a few weeks ago and love it!
This is why I can't use zone focus for street photography, "everything is pretty close to in focus" is not good enough for me. Thanks, Tony.
@Joe Trent I understand and agree but I want subject, composition AND focus.
Great video. Good explanation of the camera function. I Never heard of a Tilt/shift lens
Only CANIKON have tilt/shift lenses. Sony never made one.
Nowadays some third-party lens makers try to COPY those patented designs of Canon and Nikon.
Never knew this aspect of a tilt shift lens! Very interesting!!