wow, I started with Blender years ago, with a version that today people call vintage (earlier than 2.69). There were not many tutorials back then, the community was in an early stage, and most of the stuff you had to figure out on your own. Despite all the struggle, I enjoyed it very much, and at the same time I was quietly dreaming that one day I would be able to buy this expensive studio-standard software. Even though I've neglected Blender for the last few years, I've been following all the new versions that have been released, and I've also learned so much from Andrew's videos. I still enjoy modelling and am now even getting into animation. I must say, it's amazing how the community has grown over the years, and I'm overwhelmed by the changes the developers have provided. I'm too old to find a job in the industry, but it's so heartwarming that Blender gives so much opportunity to all people who are interested in 3d and digital art - for free. You guys are truly changing lives, many thanks and happy blending!
I came across this guy a couple of years ago during my research, and I have learned a lot from him. It is also exciting that he always has something to share. We appreciate this guy.
Some observations from a photographer's perspective: Bokeh is not related to the scale of the subject per se; it is a function of the focal distance and the size of the sensor. All other things being equal, a phone sensor will get bigger depth of field, while a medium-format camera gets shallower depth of field. Photographers talk about the medium-format or large-format 'look', which usually means the shallow depth of field with more of the subject in the frame. There's a 'hack' to achieve this look with a regular camera, called the Brenizer method, which is basically faking a large sensor: shooting the subject in manual mode with a large aperture, then taking lots of photos in strips around the subject and stitching them all together in Photoshop.

The other thing of note: you mentioned the Exposure Triangle and ISO, explaining that ISO 'forces' more light onto the sensor. This misconception comes from every educational photography site or video, which without fail brings out the Exposure Triangle and, in order to have it make sense, explains that ISO controls the 'sensitivity' of the sensor. Yes, I see this written and said all over the place, even though it is patently false. Exposure consists of aperture and shutter speed ONLY. For starters, digital ISO is NOT the same as film ISO. Camera makers deliberately created this confusion in the transition to digital to entice professional photographers over using concepts they already knew. After all, the first digital cameras simply swapped out the film back for a digital one with a sensor in place of film, hence "full-frame", since the camera body was made for 35mm film. So what is digital ISO? It is a post-exposure signal boost, a combination of analog and digital boosting straight off the sensor, and the noise is a function of the signal-to-noise ratio. High ISO doesn't necessarily mean more noise, paradoxically.
What ISO does is allow the photographer to deliberately UNDEREXPOSE; the ISO is an internal real-time compensation that boosts the signal, or lightness of the image, AFTER the exposure. BTW, when it comes to image noise and detail, it is ALWAYS better to use high ISO than to underexpose at low ISO and boost 'exposure' in post. In the interest of 'dumbing down', and having been subjected to the same misinformation themselves in the past, these educational sites perpetuate the myth that ISO controls sensor light sensitivity and that, like film, there's a direct correlation between high ISO and noise. In reality digital sensors can't, and never could, change their 'sensitivity'. Once the shutter is closed and the sensor is read, THEN digital ISO is applied. The deep-seated belief in the Exposure Triangle drives this misconception, despite them knowing better. The Exposure Triangle is not holy writ: the photographer Bryan Peterson first described the concept in his 1990 book "Understanding Exposure", where he called it the Photographic Triangle. As described, he is absolutely correct. Later, others renamed it the Exposure Triangle. All three settings are related in that they all use the same logarithmic scale; halving or doubling one value halves or doubles the brightness of the image. But that's not exposure, even though in the days of film it related to the sensitivity of the film. The term ISO (confusingly) was named after the standards organisation itself, where the standard is defined (old-timers will remember film speed being called ASA, for the American Standards Association); there are several standards depending on the type of film, such as B&W, colour negative, or slide film. Digital ISO is an entirely different standard, which aims to be compatible in use with the old film ISO standards by using the same logarithmic scale and being defined relative to the brightness of film at the same ISO setting.
Basically, the confusion is quite deliberate on the part of the camera manufacturers during the transition to digital, as they wanted to entice professionals over who were set in their ways.
As soon as he mentioned that more ISO = more noise I thought the same thing 🤔 I was gonna comment on it until I saw your deep explanation 😎 but yes, in brief: an underexposed image could have more noise when you increase the exposure in post-processing to match a high-ISO image 🤓 unless you have an ISO-invariant camera, in which case you will always get the same amount of noise 😎
@@ngonjuan There are no *true* ISO-invariant cameras, but the camera makers have made great strides in minimising noise by various means, including reducing analog noise in circuitry and having more than one analog-to-digital converter that kicks in at a certain ISO and drops the noise level right down. So, for instance, the noise at ISO 800 may be less than at ISO 640 in the case of my Fuji, with the effect that across a certain range of ISO you won't notice much increase in noise. Sony has been at the vanguard of this, which is the advantage of being a large electronics company and a major sensor manufacturer.
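The "boost after exposure" point in this thread can be shown with a toy Monte Carlo sketch. This is a deliberately simplified model, not any real camera's pipeline, and the noise figures are made-up illustrative numbers; the only property it needs is that some noise enters the chain *after* the analog gain stage.

```python
import random
import statistics

def capture(photon_mean, analog_gain, digital_gain,
            downstream_noise_sd=5.0, n=20000):
    """Toy pipeline: photons -> analog (ISO) gain -> downstream
    read/ADC noise -> digital brightening ('exposure' in post)."""
    samples = []
    for _ in range(n):
        photons = random.gauss(photon_mean, photon_mean ** 0.5)   # shot noise
        analog = photons * analog_gain                            # boost before readout
        raw = analog + random.gauss(0.0, downstream_noise_sd)     # noise added after the gain
        samples.append(raw * digital_gain)                        # brighten in post
    return statistics.mean(samples), statistics.stdev(samples)

random.seed(42)
# The same underexposed scene brightened 8x in two ways:
hi_iso_mean, hi_iso_sd = capture(20, analog_gain=8, digital_gain=1)  # raise ISO in camera
post_mean, post_sd = capture(20, analog_gain=1, digital_gain=8)      # brighten in post
# Both come out equally bright, but the post-brightened image is noisier,
# because the downstream noise was multiplied by 8 along with the signal.
```

The high-ISO path amplifies the signal before the downstream noise is added, so that noise stays small relative to the result; the post boost multiplies signal and downstream noise alike.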
From a visual artist's and photographer's perspective, light is everything. How we manipulate it using the inverse-square law is important for exposure, and understanding it is essential to lighting things realistically.
Management of exposure is SO underrated in CG. It is heavily exploited in 2D (especially oil painting, where it's referred to as "value grouping"), and it gives such a lovely naturalistic look that it's a shame it's not used more.
A few minutes in and I already know this is the best presentation from BCON 22... Andrew Price is not only a great Blender artist but also a great teacher!
Thank you so much for these basics. You'll make me dive into photorealism one of these days for sure, just as you did for our Blender immersion and our wish to teach it in a school of architecture. Brilliant.
To add, for people interested: glare is the reflection between two mediums within the camera lenses. As camera lenses consist of various pieces of glass, each of them designed to help with some aspect of light refraction, the incoming light is reflected at each glass-to-glass boundary. You can see the same effect on windows with multiple glass panes. Lens flares are created by the same principle (the real ones, not the PS plugin ;D ). They are glare from such extremely strong light sources that you see the boundaries of certain lens elements in the image. That's why every lens has a different-looking lens flare. Also, don't confuse glare/lens flares with bokeh; that's an entirely different principle, though at times it looks similar. Bokeh is unfocused light, whereas glare/lens flares can occur perfectly fine in focus. Even the human eye would have glare, but usually we're not able to look at light bright enough to "see" that effect. (Please don't try it! You can actually hurt your eyes.)
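A rough back-of-envelope for the glass-to-glass reflections described above, using the normal-incidence Fresnel reflectance. The 8-element lens is a hypothetical example, and real lenses are coated precisely to beat this ~4% per-surface figure:

```python
def fresnel_normal(n1, n2):
    # Reflectance at normal incidence at the boundary between two
    # media with refractive indices n1 and n2
    return ((n1 - n2) / (n1 + n2)) ** 2

r = fresnel_normal(1.0, 1.5)   # air to typical glass: about 4% bounces back
# A hypothetical uncoated lens with 8 elements has 16 air/glass boundaries;
# the fraction of light that passes all of them without a stray bounce:
transmitted = (1 - r) ** 16    # roughly 52% -- the rest feeds glare and flares
```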
I think it would be okay to touch specular on materials, but only if you know the IOR. In most cases, dielectrics can be kept at .5/.425 (depends on who you ask haha), but metals should definitely have their specular changed. This is what helps bring out the tint desaturating along the Fresnel reflection.
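For what it's worth, the ".5/.425" split above maps neatly onto which IOR you assume, if one takes the Disney-style parameterization where a Specular value of 0.5 corresponds to 4% normal-incidence reflectance. This is a sketch of that mapping, and the material names on the example values are illustrative; check your Blender version's docs before relying on the exact formula:

```python
def f0_from_ior(ior):
    # Normal-incidence reflectance (F0) from the index of refraction
    return ((ior - 1) / (ior + 1)) ** 2

def principled_specular(ior):
    # Disney-style mapping: Specular = F0 / 0.08, so Specular 0.5
    # corresponds to F0 = 4% (an assumption about the slider's scale)
    return f0_from_ior(ior) / 0.08

spec_glass = principled_specular(1.5)   # 0.5   -> the common default
spec_145 = principled_specular(1.45)    # ~0.42 -> the ".425-ish" camp
spec_water = principled_specular(1.33)  # ~0.25
```

So both camps in the comment are internally consistent; they just start from slightly different assumed IORs (1.5 vs 1.45).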
I love and study physics because, no matter what, it is an inescapable part of our reality, and this entire talk, whether you like it or not, is a physics lecture! I was just waiting for the inverse-square law to be mentioned.
I also often get confused by the aperture science. Joking aside, this video helped me with a texture I had trouble with. I went and applied his advice and it immediately looked 100% better.
These are things we instinctively know and can reproduce but never really think to break down; the light fall-off from the match, from the fingertip to the knuckle, blew my mind a bit. Especially how the percentage of fall-off decreases the further the objects are from the point of origin.
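The match example can be put into numbers with the inverse-square law. The distances here are hypothetical, just to show the scale of the effect:

```python
def relative_illuminance(d_ref, d):
    # Inverse-square law: light per unit area falls off with 1/d^2
    return (d_ref / d) ** 2

# Hypothetical distances from a match flame (metres):
fingertip = relative_illuminance(0.02, 0.02)  # 1.0, our reference
knuckle = relative_illuminance(0.02, 0.06)    # ~0.11: 3x the distance, 1/9 the light
wrist = relative_illuminance(0.02, 0.20)      # 0.01: 1% of the fingertip's light
# Far from the source the *relative* change shrinks: moving from
# 1.00m to 1.04m barely changes anything, which is why distant
# lights look so even across a subject.
far_a = relative_illuminance(0.02, 1.00)
far_b = relative_illuminance(0.02, 1.04)
```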
Depth of field is the portion in focus. So: wider aperture, lower f-stop, shallower depth of field (you get the foreground and background blurred). The opposite: smaller aperture, bigger f-stop, deeper depth of field, and everything is in focus.
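As a sketch of how the aperture drives this, here is the standard thin-lens depth-of-field approximation (values in millimetres; the 0.03mm circle of confusion is a common full-frame assumption, and the example lens/distances are arbitrary):

```python
def dof_limits(f, n_stop, s, coc=0.03):
    """Thin-lens depth-of-field approximation.
    f: focal length (mm), n_stop: f-number, s: focus distance (mm),
    coc: acceptable circle of confusion (mm)."""
    h = f * f / (n_stop * coc) + f                      # hyperfocal distance
    near = h * s / (h + (s - f))                        # closest sharp point
    far = h * s / (h - (s - f)) if h > (s - f) else float('inf')
    return near, far

# A 50mm lens focused at 3m:
wide_near, wide_far = dof_limits(50, 1.8, 3000)     # f/1.8: roughly 2.8m to 3.2m
narrow_near, narrow_far = dof_limits(50, 16, 3000)  # f/16: roughly 1.9m to 6.8m
```

Opening up from f/16 to f/1.8 shrinks the in-focus zone from metres to centimetres, which is exactly the "lower f-stop, shallower depth of field" rule above.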
I really appreciate that Pixar treated Toy Story 4 with the dignity that was owed to such a groundbreaking franchise. I’m sure those working on the technical/visual aspects of the film consciously chose to treat the task as an envelope-pushing challenge, rather than a “make-a-quick-buck” chore that most sequels fall victim to.
awesome info and presentation! I didn't know about the polarizing light and photo idea to take a more neutral photo of materials, that's really nifty!!
Small correction at around 7:25: this would be called scattering, scientifically. Refraction, specifically, is the change in angle a "light ray" experiences when moving between two media of different optical density.
Ngl, Toy Story 4 was a good choice even though it wasn't live action; the amount of effort and realism that went into it was amazing. It would be hard for a spin-off or a 5th movie to top.
Great presentation, thanks. Love hearing 3D guys talk about how cameras work in the real world! I make animated and live-action commercial films so use both cinema cameras and 3d software and I loved the principles shared here as well as watching Andrew trip over some of the finer details of real world photography/film making 🤣 Despite this, I learned a lot from this and I know Andrew's scenes are infinitely more photo-real than anything I have created! lol
The same amount of light falls onto a camera sensor per unit size per exposure. There is no extra light 'forced' onto the sensor. The 'grain' increase is NOISE created through the amplification of the signal. Every camera sensor has what's called a NATIVE ISO (for the Nikon Z7, for example, it's 64), so everything after that is amplification. Digital sensors don't magically increase or decrease their pixel size, hence the need for amplifying the signal. Speaking of real film: the faster the film, i.e. the more sensitive, the larger the physical grain crystals on the emulsion to gather more light. That's why grain was more apparent with faster film.
Amazing talk, everyone wanting to get into this industry should start with this video. Does a great job at teaching basic principles and concepts that are very important to keep in your knowledge base at all times. He's basically saving you from making hundreds, if not thousands of mistakes. It took me 5 years+ to learn these things on my own, you got it in 24mins. edit: If you took every topic in this video and went off to study them in depth, you could very quickly become proficient at this.
QUESTION: when he talked about how materials are divided into non-metallic and metallic, I was wondering: what about glass? What property does it hold compared to the others? Such a great talk from Andrew!!!
One humble correction: the "0.5" setting means "180-degree shutter angle", which gives you the cinema standard motion blur. It's not random. RSMB uses the same notation.
"which gives you the cinema standard motion blur" - only if you're rendering at 24 fps, right? There's no sane reason you'd want exactly 0.5 / 180 degree shutter if you're rendering at more common digital framerates such as 30 and 60 fps. The motion blur would NOT look like what people are used to from movies, so what would be the point of using that arbitrary setting?
@@forasago No, Forasago, a 180-degree shutter angle at 60fps and 30fps would give you shutter speeds of 1/120 and 1/60 respectively, which means it would still give the cinema standard motion blur, but the higher framerates would give you a non-standard sense of motion / the feel would still be cinematically "wrong" or TV-esque or video game-esque (unless you play them back in slow-motion at 24fps). I've performed these experiments many ways in AE, using my GH2 and Reelsmart Motion Blur.
@@EdNorty "No, and here's why you're right." If you're going to agree with me (that 180 degree shutter speed will NOT feel like a movie at any framerate other than 24 fps) why start with a No? I don't see a reason why anyone would use 180 degree shutter at non-movie framerates. Do you? For games it actually seems more reasonable to use a 360 degree shutter speed. After all what we're really after is eliminating the feeling that there's something missing / being skipped. A 24 fps movie is already very blurry with 180 degrees and I would guess there are also technical limitations preventing the shutter from being open the entire time. But game engines don't have this limitation, and higher framerates end up looking very sharp even when you crank the blur up. I have tested 360 degrees in Unity (at 144 fps locked on a 144 Hz display) and it looks nice.
@@forasago Because I'm not agreeing with you. I'm saying there's a distinction between the motion blur and the felt sense of motion. If you're, for example, shooting a telenovela at 60fps, the 180-degree shutter angle standard would mean you will use 1/120. If it's a Hobbit movie at 48fps, shooting at the 180-degree shutter angle standard would mean it's 1/96. Anything higher than the 180-degree angle gives you a jittery action-scene look, and anything lower a dreamy, echo-ey look, regardless of whatever frame rate you're shooting at. But even using the 180-degree angle, if you're not shooting 24fps it still wouldn't give you the traditional feel. TL;DR: If someone shot a YouTube video at 60fps and used 1/5000 for some reason, it'll just look worse than if it had been shot at 1/120.
@@EdNorty "Anything higher than the 180-degree angle, it gives you a jittery action-scene look and anything lower, a dreamy echo-ey look, regardless of whatever frame rate you're shooting at." This is obviously false. You always have less blur at higher framerates since less movement is actually happening per frame. Just imagine you're recording at 1000 fps, there would be no blur to perceive at all, no matter the shutter speed.
20:01 I think that "Shutter" value is equivalent to what they call "shutter angle", just measuring the angle in full rotations instead of the usual degrees. You can imagine it as the percentage of a disc, spinning once per frame in front of the camera's image sensor, that is cut out to allow light in. In other words, it's equivalent to exposure time, but in units proportional to the frame time (1 / frame rate): the higher the frame rate, the smaller the exposure time.
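The conversions being discussed in this thread are tiny; here they are in code (a sketch of the relationships, not Blender's actual source):

```python
def exposure_time(shutter_value, fps):
    # Blender's 0-1 Shutter value: the fraction of the frame interval
    # during which the virtual shutter is open
    return shutter_value / fps

def shutter_angle(shutter_value):
    # The same quantity, expressed the motion-picture way
    return shutter_value * 360.0

t = exposure_time(0.5, 24)    # 1/48 s: the classic "cinematic" motion blur
angle = shutter_angle(0.5)    # 180 degrees
t60 = exposure_time(0.5, 60)  # 1/120 s: same shutter angle, higher frame rate
```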
It's very generous of Andrew to grace us with his wisdom whenever he's not too busy shilling NFTs and complaining about how he can't say slurs anymore.
18:50 What he's describing is a lot like how LightWave did depth of field back in 1990: it would just render the same shot from several slightly different viewpoints centered around the focal point and average the pixels, so that items at the focal point are sharp, but the further details were from it, the more they were blurred. IIRC it only did 4 or 8 or so shots, so it doesn't exactly create nice bokeh.
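That jitter-and-average idea can be shown with a toy 1D model. This is not LightWave's actual implementation, just the geometry of why the focal plane stays sharp while everything else smears:

```python
import random

def image_pos(px, pz, cam_x, focus_z):
    # Where the ray from a jittered camera at (cam_x, 0), re-aimed so the
    # focal plane stays aligned, lands the point (px, pz) on that plane.
    # Points exactly on the focal plane land in the same spot for every
    # jitter; everything else shifts with cam_x.
    return cam_x + (px - cam_x) * focus_z / pz

def blur_width(px, pz, focus_z, aperture=0.05, samples=64):
    # Averaging many such renders smears off-focus points over this spread,
    # which plays the role of the circle of confusion
    xs = [image_pos(px, pz, random.uniform(-aperture, aperture), focus_z)
          for _ in range(samples)]
    return max(xs) - min(xs)

random.seed(1)
sharp = blur_width(1.0, 10.0, focus_z=10.0)    # on the focal plane: ~0
soft = blur_width(1.0, 20.0, focus_z=10.0)     # behind it: smeared
softer = blur_width(1.0, 40.0, focus_z=10.0)   # further back: smeared more
```

With only 4 or 8 jittered samples instead of 64, the averaged result shows discrete ghost copies rather than a smooth disc, which is why the old technique didn't produce nice bokeh.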
"Leave specular at 0.5" is just plain wrong. It's right for a theoretically perfect surface, which makes it a bit funny as he mentions porous surfaces - those are not theoretically perfect anymore 😀 Some of those specular reflections will end up getting absorbed by the material and never show up as super rough reflections at all; there is a loss of specular energy that will not happen with specular remaining at 0.5. Disney calls this slider an "artistic value" - if the reference material looks less reflective (as a whole or in microscopic spots), just lower the specular. Similarly, although not necessarily much applicable to everyday materials, some can have a coating on them that raises the reflectivity, just as you can have an anti-reflective coating on glasses. And specular set to 0? I use this for shadow gaps, i.e. in the cracks between wooden floor boards, where it will remove specular reflections completely rather than do a very rough reflection.

Also, refraction is not absorption. Absorption is the correct umbrella term (vs reflection), and refraction *can* be one of the outcomes but doesn't have to be. An object either reflects or absorbs based on the chance of bouncing off the surface, and refraction is one of the possibilities that can happen as part of absorption. The Unity/Unreal (?) talk video from ages ago describes this far better.

It's not only metals that can reflect non-whites. I know what he's getting at, but it's not a technically correct statement. For game stuff, yeah, it's absolutely true that this is usually good enough for dielectrics. But there are reasons the metallic workflow (which only allows colored reflections for metals) is known to have this limitation. Principled has a specular tint that can allow inter-reflections to pick up base color, but even this is not sufficient to cover all cases. Some of these tints are due to special optical effects, like thin film, iridescence, naturally occurring pearlescence, opalescence and so on. So *not always* white.
Kudos for getting the bump distance more properly set during the Suzanne demo; most people wrongly just reduce the bump effect. It should have been mentioned, though. Others have covered shutter speed. I'll just throw in that, for better control, you can use cryptomatte/material ID/object ID to control various levels of denoising and glare effects. This is all just nitpicking though. Great presentation overall and a thumbs up from me. Hmm, I haven't seen Toy Story 4 yet. Damn, that's some fantastic looking anamorphic bokeh.
I actually agree with this. If you're trying to match reference and the roughness value doesn't make it as rough or smooth as it should, *then* change the specular value (since it's probably an edge case). Just don't start with it. As Disney says, it should mostly be used as an artistic slider.
Your shadow gaps description doesn't make any sense in a physically based workflow. I'm sure it works just fine, but Andrew is, I would assume, trying to describe a more physically correct workflow. For most materials, specular should remain at 0.5. Surfaces that are, for example, porous are the exception to the rule.
@@spaceman-pe5je As for shadow gaps, check the thumbnail for the video "Two FREE and Cheap Ways to Fix Gaps In Your Floor". There is (also specular) energy loss as a result of geometry, geometry that isn't there when we fake it with a flat plane: somewhat in the connecting gap, and completely in the butt gap. I can also create an actual surface with microgeometry (say tiny bent microtubes, like in rubber) that will lose energy in a way that cannot be replicated with specular 0.5, even if the theoretical value is 0.5. Physically based is not the same as physically correct; it doesn't account for everything observed. Even just using normal maps on a flat plane doesn't actually do shadowing and masking properly the way microdisplacement would. The takeaway is to observe what is actually happening and USE the tools at hand to get that effect, rather than insist on using theoretical values from some IOR table. Using a slightly wrong IOR (which controls F0 reflectivity, always around 4%, say plus or minus 1%) for a fully absorbent material isn't going to be what makes or breaks the render. Not accounting for shadow gaps and actual observed energy loss will. As for modelling every plank to incorporate shadow gaps the correct way? That is not something I can afford in my work (big office venues).
@@ShankarSivarajan Other than to smooth out sharp transitions and for antialiasing purposes, not really. In theory. Of course, anything is legal for artistic choices, so...
For the 'simulating clipping' part: that is probably correct for simulating standard/older photography, but it is far less so with modern high-dynamic-range cameras and HDR video. So you would need to be very clear on the simulated limitations you are choosing.
Not only a good video for beginners, but it also has reminders for the more advanced user. Mind, though, that photorealism is a choice and not always a must. Sometimes photorealism is not a good idea; think about the uncanny valley, or about production time. I was always interested in photorealism until I recently realized these two downsides. You can never make it realistic enough, and if you try, it's very time-consuming to get to a finished production. Photorealism is a good idea when your client wants a photo-realistic representation of their product in an ad, or as a hobby, for example. I think many of us focus too much on photorealism by default.
As a photographer/videographer, my guess as to where the shutter value comes from is the general rule for video of shooting at one over double your frame rate. So if you're shooting at 24 FPS (the movie standard), then your shutter is about 1/50 of a second. In essence, the shutter is open for 0.5 of each frame's duration, which is the default in Blender.
It's confusing that moonlight is around 4,000K (neutral white), yet on the Kelvin scale that sits a bit toward the warm (tungsten) end on the left, while the typical blueish look of moonlight is more on the right (daylight) side.
I really wanted to hear Andrew talk about a new topic, like AI, or at least not about something he's already covered on his YouTube channel. Good talk anyway!
He's got some great points, I think he's gonna be big in the Blender community someday.
he is already, his official channel is blender guru
@@aditya.k7543 it was a joke
Lol
These are THE BASE; they are topics that you study in the first week you do 3D. They are not "good points", they are the fundamentals; if people studied, they would consider them "obvious", not "good points". It is the base of 3D and materials.
@@Norman_Peterson ... ?? No offence but what is Your point here ?? Is it that "You know this, so everyone else should" ?? You "say" that "these are fundamentals, if people studied"....
But if You had been "actively following blender usage" You should know that many if not the majority who are interested in blender (and 3d in general) are not "studying it" or rather they are at least not "studying that part" (photo realism)...They are trying to learn modelling and or animating and or simulation and or .... etc etc. So for us this IS absolutely "good points", if nothing else as "reminders".
Now if You are (what You Yourself consider) "a proper student of 3D" this might appear differently. But I would venture to say that You are not the norm.
Best regards
I like the part where guru said "it's blender time" and blended all the audience.
really is one of the blend
I like the bit where people regurgitate the same old shite
surely one of the comments of all time
unironically too, why not
@@phutureproof yeah old patter
This is truly one of the comments of all time 🔥
Truly one of the blends of all time
19:25 The shutter value is the percentage of the frame duration that the shutter is open for. At 0.5 shutter value and 24 fps the exposure duration is 1/48th of a second. It's similar to the "shutter angle" for a motion picture camera, but expressed as 0-1 instead of an angle.
just wanted to add the same note here: 0.5 in blender would be a 180° shutter angle
Yeah I really shoulda researched that more 😅 always assumed it was arbitrary. Thanks for correcting me.
@@blenderguru great presentation btw!
I didn't understand this part, what's the difference between an angle and the shutter speed?
@@KabeeshS They're different units representing the same thing - shutter speed. You have to imagine a rotating disk with an angled opening in front of the film/sensor; when you have 180 degrees set, it's half open (because a full circle is 360 degrees). It spins once for every frame, so if the light passes through for only half of this spin, then at 24fps it is 1/48s. This comes from old film cameras.
The motion blur checkbox is actually there if you just want Blender to crash when rendering.
Is it possible the cause of the crashing is insufficient or too-slow hardware?
That, and lack of sufficient memory space @@English_to_Persian
All hail the Donut King !!! 🍩
😆'ALL HAIL!!!....
😮😀🙂👧🧔🙆♂️🤠😎'HAZAA!!!
Donut boy is best boy.
Hear ! Hear!
LONG LIVE THE KING!
NFT Donut King I suppose
Andrew quickly letting everyone know that he's excited about the laser pointer was the most Guru thing ever.
That and his amazing ability to make you feel like blender *IS* understandable
@@pierrec3531 lol
Andrew my man, you're absolutely a beacon of knowledge in the 3D world. Not enough thanks can be given to let you know how much good you've done.
On another note : when wearing chinos that tight, absolutely empty your pockets.
Definitely to both points.
Lmao
"Is that a doughnut in your pocket or you're just happy to be here?"
I didnt even notice that lmao
Congrats to Mr Price for being so successful in the blender community
I was actively working on a project while watching this and those little things he said to do made a giant difference already
broo same here i was like "woaahhhh"
That "lamp-face-focus" example is going to change my 3d renders understanding on its own. Thanks a lot for sharing this amazing talk :]
Was really nice to return to this subject!
A few notes:
there is a way to have multiple planes of focus in a real camera by using a split diopter, which some films have used to the same effect as in Toy Story 4, so even though it seems to go against practical real world situations with a real camera, it's actually a real effect.
Metallic surfaces do have albedo, it's just that most of the time those albedo values are quite dark. Pure silver for example has a fairly bright albedo if you cross polarize all the reflections away.
Anamorphic lenses actually do the opposite of what he described: they capture twice the width in a compressed/squeezed format, and you stretch it back out/desqueeze it in post. The ovals are produced by cylindrical lens elements at the end of the chain of lens elements, as opposed to the spherical elements in lenses with circular bokeh.
That default 0.5 motion blur isn't "made up" at all - it's half the frame duration (also known as a 180-degree shutter).
A bit like the specular slider on the principled BSDF - you don't want to change this.
I think Andrew is coming at this from a stills photography background, where adjusting shutter speed to account for light is acceptable in most cases - however this is not so for moving image. One of the most common mistakes a novice video creator makes is to shoot with a random shutter speed. You should always aim to shoot with a shutter speed half that of the frame rate you are shooting at if you want natural looking motion - Change the ISO, aperture and amount of light in your scene to compensate for exposure, not shutter speed when shooting motion.
You should shoot at double the frame rate actually.
@@FrancescoSpace It's half, as in half the time. Shutter speeds on stills cameras are labeled in fractions and so half the time is double the fraction, which i think is what confused you?
@@hanktremain oh right
@@hanktremain so meaning, if it's a 24fps project, then the shutter value at 0.5 makes it 1/50th of a second, right?
@@KabeeshS a 180-degree shutter at 24fps would be equivalent to 1/48. In other words, 1/48 is half of 1/24.
...but 1/50 would be close enough for most people.
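To make the thread above concrete, the conversion can be sketched in a few lines of Python (a minimal sketch assuming Blender's 0-1 shutter value, where 1.0 means the shutter is open for the entire frame duration; the function names are my own):

```python
def exposure_time(fps: float, shutter_value: float) -> float:
    """Exposure duration in seconds for a given frame rate and a
    Blender-style 0-1 shutter value (1.0 = open for the whole frame)."""
    return shutter_value / fps

def shutter_angle_to_value(angle_degrees: float) -> float:
    """Convert a cinema shutter angle (0-360 degrees) to a 0-1 value."""
    return angle_degrees / 360.0

# The cinema-standard 180-degree shutter at 24 fps:
value = shutter_angle_to_value(180)   # 0.5, Blender's default
print(exposure_time(24, value))       # 1/48 of a second
```

The same two lines explain the replies above: at 60 fps a 180-degree shutter gives 1/120, and at 48 fps it gives 1/96.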
These are great points! I remember using the camera tricks majority of the time, but nothing can take away from the use of lighting to create cinematic scenes.
Of all of this, and I learned a lot from it, my absolute favorite part was the tiny little bit you spoke about Dune. I was GENUINELY wondering what they did to get that very natural motion blur. Good to know! Really appreciated that tiny little tidbit.
That slight nervousness in the beginning. Just goes to show how passionate Andrew is about CG :))
The man the myth the legend, Andrew Price
Damn Andrew is so happy; you can see it in his pocket.
Always pleasure to listen to Andrew's speeches, thank you for uploading
Even after all these years you're still sharing the knowledge. Thanks for everything you've done for the community Andrew 👍🏼
The man that brought me over to Blender way back when, thanks again. 👍
Great presentation. People keep forgetting that everything is reflective, more or less.
wow, I started with Blender years ago, with a version that today people call vintage (earlier than 2.69). There were not many tutorials back then, the community was in an early stage, and most of the stuff you had to figure out on your own. Despite all the struggle, I enjoyed it very much, and at the same time was quietly dreaming that one day I would be able to buy this expensive studio-standard software.
Even though I've neglected Blender for the last few years, I've been following all the new versions that have been released, and have also learned so much from Andrew's videos. I still enjoy modelling and am now even getting into animation. I must say, it's amazing how the community has grown over the years, and I'm overwhelmed by the changes that all the developers have provided. I'm too old to find a job in the industry, but it's so heartwarming that Blender gives so much opportunity to all people who are interested in 3d and digital art - for free.
You guys are truly changing lives, many thanks and happy blending!
I came across this guy a couple of years ago during my research, and from him I have learned a lot. It is also exciting that he always has something to share.
We appreciate this guy.
The reason I love the Andrews (Andrew Kramer, Andrew Price): they explain things nicely & easily! 💚
WHATSUPGUYS AND DREWWWWWWWWW KRAMER HERE
Andrew Kramer is a Cinema 4D user..😄😄😄
@@sam-qu1qe Maybe; but his tutorials are lit 🫡🫡
Some observations from a photographer's perspective: The Bokeh is not related to scale of the subject per se, it is a function of the focal distance and the size of the sensor. All other things being equal, a phone sensor will get bigger depth of field, while a medium-format camera gets shallower depth of field. Photographers talk about the medium-format or large-format 'look', which usually means the shallow depth of field with more of the subject in the frame. There's a 'hack' to achieve this look with a regular camera, called the Brenizer method, which is basically faking a large sensor by taking the subject in manual mode with a large aperture, then taking lots of photos in strips around the subject and stitching them all together in Photoshop.
The other thing of note: You mentioned the Exposure Triangle and ISO, explaining that ISO 'forces' more light onto the sensor. This misconception comes from every educational photography site or video, which without fail brings out the Exposure Triangle and, in order to have it make sense, explains that ISO controls the 'sensitivity' of the sensor. Yes, I see this written and said all over the place, even though it is patently false. Exposure consists of Aperture and Shutter Speed ONLY. For starters, digital ISO is NOT the same as film ISO. Camera makers deliberately created this confusion in the transition to digital to entice professional photographers over using concepts they already knew. After all, the first digital cameras simply swapped out the film back for a digital one with a sensor in place of film, hence “full-frame” since the camera body was made for 35mm film.
So what is digital ISO? It is a post-exposure signal boost, a combination of analog and digital boosting straight off the sensor, and the noise is a function of the signal to noise ratio. High ISO doesn’t necessarily mean more noise, paradoxically. What ISO does is allow the photographer to deliberately UNDEREXPOSE and the ISO is an internal real-time compensation by boosting the signal, or lightness of the image AFTER the exposure. BTW it is ALWAYS better to use high ISO versus underexposing at low ISO and boosting ‘exposure’ in post when it comes to image noise and detail.
In the interest of ‘dumbing down’ and being subjected to the same misinformation themselves in the past, these educational sites perpetuate the myth that ISO controls sensor light sensitivity, and like film there’s a direct correlation between high ISO and noise. In reality digital sensors can’t, and never could, change their ‘sensitivity’. Once the shutter is closed and the sensor is read, THEN digital ISO is applied. The deep-seated belief in the Exposure Triangle drives this misconception despite them knowing better.
The Exposure Triangle is not holy writ: the photographer Bryan Peterson first described the concept in his 1990 book “Understanding Exposure”, where he called it the Photographic Triangle. As described, he is absolutely correct. Later, others renamed it the Exposure Triangle. All three settings are related, as they all use the same logarithmic scale; halving or doubling one value halves or doubles the brightness of the image. But that's not Exposure, even though in the days of film it related to the sensitivity of the film. The term ISO (confusingly) was named after the standards organisation itself, where the standard is defined (old-timers will remember film speed being called ASA, for the American Standards Association); there are several standards, depending on the type of film, such as B&W, colour negative, or slide film. Digital ISO is an entirely different standard, which aims to be compatible in use with the old film ISO standards by using the same logarithmic scale and being defined relative to the brightness of film at the same ISO setting. Basically, the confusion is quite deliberate on the part of the camera manufacturers during the transition to digital, as they wanted to entice professionals over who were set in their ways.
As soon as he mentioned that more ISO = more noise I thought the same thing 🤔 I was gonna comment on it until I saw your deep explanation 😎 but yes, in a brief: a underexposed image could have more noise when increasing the exposure in post processing to match a high ISO image 🤓 unless you have an noise invariance camera, in which case you will always get the same amount of noise 😎
@@ngonjuan there are no *true* ISO invariant cameras, the camera makers have made great strides in minimising noise by various means, including reducing analog noise in circuitry and having more than one analog-to-digital converter which kicks in at a certain ISO and drops the noise level right down, so for instance the noise at ISO 800 may be less than at ISO 640 in the case of my Fuji, with the effect at a certain range of ISO, you won’t notice much increase in noise. Sony has been at the vanguard of this, the advantage of being a large electronics company and major sensor manufacturer.
@@msandersen alright, my bad there. Great to learn something new about my cameras! 🤓
I ain't reading all that
"it is a function of the focal distance and the size of the sensor"
not actually true. its also a function of the lens aperture.
From a visual artist and photographer's perspective, light is everything. How we manipulate it is important for exposure, and understanding the inverse square law is important for lighting things realistically.
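The inverse square law mentioned above is simple enough to sketch (a toy illustration, not tied to any particular renderer; the function name is my own):

```python
def relative_intensity(distance: float, reference_distance: float = 1.0) -> float:
    """Light intensity relative to the intensity at reference_distance,
    per the inverse square law: doubling the distance quarters the light."""
    return (reference_distance / distance) ** 2

print(relative_intensity(2.0))  # 0.25: twice as far, a quarter of the light
print(relative_intensity(3.0))  # ~0.111: three times as far, one ninth
```

This is also why falloff looks so dramatic near a small, close light (like a match) and almost flat far from a large, distant one (like the sun).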
Management of the exposure is SO underrated in CG. It is pretty much exploited in 2D (especially oil painting, referred to as “value grouping”), and it gives such a lovely naturalistic look that it’s a shame it’s not used more.
Few minutes in and I already know this is the best presentation from BCON 22....Andrew Price is not only a great Blender artist but also a great teacher!
Meh. "We can do that with Geometry Nodes..." by Simon Thommes was amazing, just like his tutorial series on Blender Studio.
15:53 the irony of the mic clipping as Andrew says "clip"
Thank you so much for those basics. You'll make me dive into photorealism one of these days for sure, just as you did for our Blender immersion and our wish to teach it in a school of architecture.
Brilliant.
To add for people interested: Glare is the reflection between two mediums within the camera lenses. As camera lenses consist of various pieces of glass, each of them designed to help with some aspect of light refraction, the incoming light is reflected at the glass to glass boundary. You can see the same effect on windows with multiple glass panes.
Lens flares are created by the same principle (the real ones, not the PS plugin ;D ). They are glare from such extremely strong light sources that you see the boundaries of certain lens elements on the image. That's why every lens has a different-looking lens flare.
Also, don't confuse glare/lens flares with bokeh; that's an entirely different principle, though at times it looks similar. Bokeh is unfocused light, whereas glare/lens flares can occur perfectly fine in focus.
Even the human eye would have a glare, but usually we're not able to look at light that bright to "see" that effect. (Please don't try it! You can actually hurt your eyes.)
I think it would be okay to touch specular on materials, but only doing so if you know the IOR. In most cases, dielectrics can be kept at .5/.425 (depends on who you ask haha), but metals should definitely have their specular changed. This is what helps bring out the tint desaturating along the Fresnel reflection.
that's new thanks
Photo textures and photoscans with baked-in lighting often look better with a lower specularity level too.
Proper lighting is a science!
One of the most informative vids about lighting here on YouTube. Genius donut guy
@Blender Guru Great presentation, mate! I love how all this is possible within an awesome free piece of software @Blender
I love and study physics, because no matter what, it is an inescapable part of our reality, and this entire talk, whether you like it or not, is a physics lecture! I was just waiting for the inverse square law to be mentioned.
Wonderful presentation. Thanks Andrew for all your amazing tutorials and the work you are doing for the 3D world; you are a true inspiration to me.
I really started out with Andrew and still keep going with him. Always nice content; I'm happy to see him every time ❤️
I will say Lighting for Beginners is the best series by blender guru. Highly Recommend.
I also often get confused by the aperture science.
Joking aside, this video helped me with a texture I had trouble with. I went and applied his advice and it immediately looked 100% better.
These are things we instinctively know and can reproduce but never really think to break down; the light fall-off from the match from the fingertip to the knuckle blew my mind a bit. Especially how the fall-off percentage reduces the further the objects are from the origination point.
After making my donut, I feel like I know this guy personally.
What an amazing lecture. I basically knew about most of these principles from photography but didn’t put them into consideration when doing 3d renders
such a great presentation, learned a lot
Depth of field is the portion in focus. So: wider aperture, lower f-stop, shallower depth of field (you get the foreground and background blurred). The opposite: smaller aperture, bigger f-stop, deeper depth of field, with everything in focus.
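That relationship can be checked with the standard thin-lens depth-of-field approximation (a sketch; the function name and the 0.03 mm circle-of-confusion default are my own choices, with all distances in millimetres):

```python
def depth_of_field(focal_mm, f_stop, focus_mm, coc_mm=0.03):
    """Near/far limits of acceptable focus (thin-lens approximation).
    coc_mm is the circle-of-confusion diameter (0.03 mm is a common
    full-frame value)."""
    hyperfocal = focal_mm**2 / (f_stop * coc_mm) + focal_mm
    near = focus_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_mm - 2 * focal_mm)
    far = (focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
           if focus_mm < hyperfocal else float("inf"))
    return near, far

# 50mm lens focused at 2m: opening up from f/8 to f/1.8 shrinks the in-focus zone
print(depth_of_field(50, 8.0, 2000))   # roughly 1.68m to 2.46m
print(depth_of_field(50, 1.8, 2000))   # roughly 1.92m to 2.09m
```

Running the numbers shows exactly the comment's point: lower the f-stop and the in-focus zone collapses around the focus distance.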
I really appreciate that Pixar treated Toy Story 4 with the dignity that was owed to such a groundbreaking franchise. I’m sure those working on the technical/visual aspects of the film consciously chose to treat the task as an envelope-pushing challenge, rather than a “make-a-quick-buck” chore that most sequels fall victim to.
This was great! I appreciated the concise explanation on light sizes and falloff, very helpful!
WOW! Was not aware of the scale factor when dealing with Depth of Field!!
Holy crap, professional lighting of the stage. The video is great and clear.
when he whipped out a donut and said "how's it blending?" i felt that
awesome info and presentation! I didn't know about the polarizing light and photo idea to take a more neutral photo of materials, that's really nifty!!
Small correction at around 7:25.
This would be called scattering scientifically. Refraction specifically, is the change in angle a "light ray" experiences when moving between two media of different optical density.
oh, Toy Story 4 doesn't just use different depth of field, it uses different lenses! They mimicked a split diopter lens. It is actually very clever
It does use different depth of field, since they change the aperture of those emulated lenses :)
@@gurratell7326 yes!
Wow the 3D artists really nailed the design of that presenter.
Thank you for the lecture! I learned a lot of interesting things for myself. A useful presentation. Thanks.
Cannot Thank you enough man.. completed my first donut 🍩 bit proud of myself
But you were my beacon of hope..
Thank you
Ngl, Toy Story 4 was a good choice even though it wasn't live action; the amount of effort and realism that went into it was amazing. It would be hard for a spin-off or 5th movie to top.
Great presentation, thanks. Love hearing 3D guys talk about how cameras work in the real world! I make animated and live-action commercial films so use both cinema cameras and 3d software and I loved the principles shared here as well as watching Andrew trip over some of the finer details of real world photography/film making 🤣
Despite this, I learned a lot from this and I know Andrew's scenes are infinitely more photo-real than anything I have created! lol
The same amount of light falls onto a camera sensor per unit size per exposure. There is no extra light 'forced' onto the sensor. The 'grain' increase is NOISE created through the amplification of the signal. Every camera sensor has what's called a NATIVE ISO (the Nikon Z7, for example, is 64), so everything above that is amplification. Digital sensors don't magically increase or decrease their pixel size, hence the need to amplify the signal.
Speaking of real film, the faster the film, ie. more sensitive, the larger the physical grain crystals on the emulsion to gather more light. That's why grain was more apparent with faster film.
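The "ISO is post-exposure amplification" point in the comments above can be illustrated with a toy signal (made-up numbers, not a real sensor model): gain applied after readout scales signal and noise together, so it brightens the image without changing the signal-to-noise ratio captured during the exposure.

```python
# Toy sensor numbers (made up): photon signal plus fixed read noise.
signal = 100.0      # photoelectrons captured during the exposure
read_noise = 5.0    # noise added at readout, independent of exposure

def apply_gain(sig, noise, gain):
    """Digital ISO modeled as a multiplicative boost applied after readout."""
    return sig * gain, noise * gain

boosted_signal, boosted_noise = apply_gain(signal, read_noise, gain=4.0)

# Gain brightens the image but leaves the signal-to-noise ratio unchanged:
print(signal / read_noise)             # 20.0
print(boosted_signal / boosted_noise)  # 20.0
```

The only way to actually improve that ratio is to capture more light in the first place (aperture, shutter, or more light in the scene), which is the amplification argument in a nutshell.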
I like how the mic clipped when he was talking about the light clipping in the attic in toy story.
Amazing talk, everyone wanting to get into this industry should start with this video. Does a great job at teaching basic principles and concepts that are very important to keep in your knowledge base at all times. He's basically saving you from making hundreds, if not thousands of mistakes. It took me 5 years+ to learn these things on my own, you got it in 24mins.
edit: If you took every topic in this video and went off to study them in depth, you could very quickly become proficient at this.
QUESTION: when he talked about how materials are divided into non-metallic and metallic, I was wondering: what about glass? What properties does it hold compared to the others?
Such a great talk from Andrew!!!
Watched Toy Story 4 to see how good the photorealistic models were and now I am crying my eyes out.
That was the perfect length for that talk, great stuff!!
One humble correction: the "0.5" setting means "180-degree shutter angle", which gives you the cinema standard motion blur. It's not random.
RSMB uses the same notation.
"which gives you the cinema standard motion blur" - only if you're rendering at 24 fps, right? There's no sane reason you'd want exactly 0.5 / 180 degree shutter if you're rendering at more common digital framerates such as 30 and 60 fps. The motion blur would NOT look like what people are used to from movies, so what would be the point of using that arbitrary setting?
@@forasago No, Forasago, a 180-degree shutter angle at 60fps and 30fps would give you shutter speeds of 1/120 and 1/60 respectively, which means it would still give the cinema standard motion blur, but the higher framerates would give you a non-standard sense of motion / the feel would still be cinematically "wrong" or TV-esque or video game-esque (unless you play them back in slow-motion at 24fps).
I've performed these experiments many ways in AE, using my GH2 and Reelsmart Motion Blur.
@@EdNorty "No, and here's why you're right."
If you're going to agree with me (that a 180-degree shutter will NOT feel like a movie at any framerate other than 24 fps), why start with a No? I don't see a reason why anyone would use a 180-degree shutter at non-movie framerates. Do you? For games it actually seems more reasonable to use a 360-degree shutter. After all, what we're really after is eliminating the feeling that there's something missing / being skipped. A 24 fps movie is already very blurry with 180 degrees, and I would guess there are also technical limitations preventing the shutter from being open the entire time. But game engines don't have this limitation, and higher framerates end up looking very sharp even when you crank the blur up. I have tested 360 degrees in Unity (at 144 fps locked on a 144 Hz display) and it looks nice.
@@forasago Because I'm not agreeing with you. I'm saying there's a distinction between the motion blur and the felt sense of motion.
If you're, for example, shooting a telenovela at 60fps, the 180-degree shutter angle standard would mean you will use 1/120. If it's a Hobbit movie at 48fps, shooting at the 180-degree shutter angle standard would mean it's 1/96.
Anything higher than the 180-degree angle, it gives you a jittery action-scene look and anything lower, a dreamy echo-ey look, regardless of whatever frame rate you're shooting at.
But regardless of using the 180-degree angle, if you're not shooting 24fps, it still wouldn't give you the traditional feel.
TL,DR:
If someone shot a YouTube video at 60fps and used 1/5000 for some reason, it'll just look worse than if it had been shot at 1/120.
@@EdNorty "Anything higher than the 180-degree angle, it gives you a jittery action-scene look and anything lower, a dreamy echo-ey look, regardless of whatever frame rate you're shooting at."
This is obviously false. You always have less blur at higher framerates since less movement is actually happening per frame. Just imagine you're recording at 1000 fps, there would be no blur to perceive at all, no matter the shutter speed.
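That last claim is easy to verify with arithmetic (a quick sketch, assuming an object crossing the frame at constant speed; the numbers are made up for illustration): at a fixed shutter angle, the blur streak per frame shrinks as the frame rate rises.

```python
def blur_length(speed_px_per_s: float, fps: float, shutter_angle_deg: float) -> float:
    """Length of the motion-blur streak in one frame, in pixels:
    object speed multiplied by the exposure time of that frame."""
    exposure = (shutter_angle_deg / 360.0) / fps
    return speed_px_per_s * exposure

speed = 2400.0  # pixels per second across the frame
print(blur_length(speed, 24, 180))    # 50 px of blur per frame
print(blur_length(speed, 1000, 180))  # 1.2 px: effectively sharp
```

So both sides of the thread hold: the 180-degree convention fixes the *proportion* of the frame that is exposed, while the absolute amount of blur per frame still falls as the frame rate climbs.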
The king that taught me donut, PBR and lighting.. BlenderGuru!
Great presentation, Andrew!
20:01 I think that "Shutter" value is equivalent to what they call "shutter angle", just measuring the angle in full rotations instead of the usual degrees. You can imagine it as the fraction of a disc, spinning once per frame in front of the camera's image sensor, that is cut out to allow light in. In other words, it's equivalent to exposure time, but in units proportional to the frame time (1 / frame rate; so the higher the frame rate, the shorter the exposure time).
Thank you Andrew, very useful and informative. I've got a lot of information from it. Thank you again🥰🥰
I'm just here to see that handsome man. The cowboy also looks pretty realistic.
5:53 Sneaky advertising, Andrew
Xd
So you're tellin' me the secrets to photorealism are all based in understanding photography. Interesting!
It's very generous of Andrew to grace us with his wisdom whenever he's not too busy shilling NFTs and complaining about how he can't say slurs anymore.
Who care
18:50 what he's describing is a lot like how LightWave did depth of field back in 1990: it would just render the same shot from several slightly different viewpoints centered around the focal point and average the pixels, so that items at the focal point are sharp, but the further details were from it, the more they were blurred. IIRC it only did 4 or 8 or so shots, so it doesn't exactly create nice bokeh.
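The accumulation approach described above can be sketched in plain Python (a toy version of the idea, not LightWave's actual implementation): render from several camera positions jittered across the aperture, all aimed at the focal point, and average the frames.

```python
def accumulate_dof(render_fn, offsets):
    """Average renders taken from jittered camera positions.
    render_fn(offset) returns a 1D "image" as a list of floats."""
    frames = [render_fn(offset) for offset in offsets]
    return [sum(pixels) / len(frames) for pixels in zip(*frames)]

# Toy "renderer": a bright pixel that shifts with the camera offset,
# standing in for an out-of-focus point (an in-focus point would not shift).
def toy_render(offset):
    image = [0.0] * 8
    image[3 + offset] = 1.0
    return image

blurred = accumulate_dof(toy_render, [-1, 0, 1])
print(blurred)  # the point smears across three pixels at 1/3 intensity each
```

With only 3 (or LightWave's 4-8) samples, the "blur" is just a few overlapping copies, which is why it never produced the smooth bokeh you get from densely sampling the aperture.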
He is so good at explaining 3D software, he should start making tutorials
this gotta be a joke or..
"Leave specular at 0.5" is just plain wrong. It's right for a theoretically perfect surface, which makes it a bit funny as he mentioned porous surfaces - those are not theoretically perfect anymore 😀 Some of those specular reflections will end up getting absorbed by the material and never show up as super rough reflections at all; there is a loss of specular energy that will not happen with specular remaining at 0.5. Disney calls this slider an "artistic value" - if the reference material looks less reflective (as a whole or in microscopic spots), just lower the specular. Similarly, although not necessarily much applicable to everyday materials, some can have a coating on them that raises the reflectivity, just as you can have an anti-reflective coating on glasses. And specular set to 0? I use this for shadow gaps, i.e. in the cracks between wooden floor boards, which will remove specular reflections completely rather than do a very rough reflection.
Also, refraction is not absorption. Absorption is the correct umbrella term (vs reflection), and refraction *can* be one of the outcomes but doesn't have to be. An object either reflects or absorbs based on the chance of bouncing off the surface, and refraction is one of the possibilities that can happen as part of absorption. The unity/unreal (?) talk video ages ago describes this far better.
Not only metals can reflect non-white. I know what he's getting at, but it's not a technically correct statement. For game stuff, yeah, absolutely true that this is usually good enough for dielectrics. But there are reasons the metallic workflow (which only allows colored reflections for metals) is known to have this limitation. Principled has a specular tint that can allow inter-reflections to pick up the base color, but even this is not sufficient to cover all cases. Some of these tints are due to special optical effects, like thin film, iridescence, naturally occurring pearlescence, opalescence and so on. So *not always* white.
Kudos for getting the bump distance more properly set during the Suzanne demo; most people wrongly just reduce the bump effect. It should have been mentioned though. Others have covered shutter speed.
I'll just throw in that for better control, you can use cryptomatte/material id/object id to control various levels of denoising and glare effects.
This is all just nitpicking though. Great presentation overall and a thumbs up from me.
Hmm, I haven't seen Toy Story 4 yet. Damn, that's some fantastic looking anamorphic bokeh.
I actually agree with this. If you're trying to match reference and the roughness value doesn't make it as rough or smooth as it should, *then* change specular value (since it's probably an edge case). Just don't start with it. As disney says, it should be mostly used as an artistic slider.
Your shadow gaps description doesn't make any sense in a physically based workflow. I'm sure it works just fine, but Andrew is, I would assume, trying to describe a more physically correct workflow. For most materials, specular should remain at 0.5. Surfaces that are, for example, porous are the exception to the rule.
@@spaceman-pe5je As for shadow gaps, check the thumbnail for the video "Two FREE and Cheap Ways to Fix Gaps In Your Floor". There is (also specular) energy loss as a result of geometry, geometry that isn't there when we fake it with a flat plane. Somewhat in the connecting gap, and completely in the butt gap. I can also create an actual surface with microgeometry (say, tiny bent microtubes like in rubber) that will lose energy in a way that cannot be replicated with specular 0.5, even if the theoretical value is 0.5. Physically based is not the same as physically correct; it doesn't account for everything observed. Even just using normal maps on a flat plane doesn't actually do shadowing and masking properly like using microdisplacement would.
The takeaway is: observe what is actually happening and USE the tools at hand to get that effect, rather than insisting on theoretical values from some IOR table. Using a slightly wrong IOR (which controls F0 reflectivity, always around 4%, say plus or minus 1%) for a fully absorbent material isn't going to be what makes or breaks the render. Not accounting for shadow gaps and actual observed energy loss will.
As for modelling every plank to incorporate shadow gaps the correct way? That is not something I can afford in my work (big office venues).
This seems relevant: does it ever make sense to have a partial metallicity?
@@ShankarSivarajan Other than to smooth out sharp transitions and for antialiasing purposes, not really. In theory. Of course, anything is legal for artistic choices, so...
I'm not working with Blender and I got zero knowledge about it but this video has been good education for 2D Art as well!
Damn bro, Blender Foundation made a realistic dude talking about realism in blender for an hour.
So good to go back to the principles... Nice talk!
Pleasure meeting you at the Lightbox Expo brother. Very informative presentation.
I love it! Special greetings from Sofia, Bulgaria.
Very nice explanation, thank you so much 🙏🏼🙏🏼🙏🏼
You are an amazing speaker, and you get me really interested in these topics even though I don't have a lot of knowledge in 3D animation
For the 'simulating clipping' part, that is probably correct for simulating standard/older photography but is far less so with modern high dynamic range cameras and HDR video. So you would need to be very clear on the simulated limitations you are choosing.
he explains this very well, he should start educating and teaching things like this on youtube.
Not only a good video for beginners, but it also has reminders for the more advanced user. Mind, though, that photorealism is a choice and not always a must. Sometimes photorealism is not a good idea; think about the uncanny valley, or about production time. I was always interested in photorealism until I recently realized these two downsides. You can never make it realistic enough, and if you try, it's very time-consuming (to bring to production). Photorealism is a good idea when your client wants a photo-realistic representation of their product in an ad, or as a hobby, for example. I think many of us focus too much on photorealism by default.
I literally love everything this man says.
I'll be back after watching toy story 4 :D Thanks anyway Andrew!
EDIT: Darn, couldn't wait. This was brilliant!
I once started making a donut in 3D.
Thank you for your good presentation.
This is gem for 3D artists!
As a photographer/videographer, my guess as to the value for shutter speed would be the general rule for video of shooting at one over double your frame rate. So if you're shooting at 24 FPS (the movie standard), then your shutter is 1/50 of a second. So in essence, your exposure is 0.5 of the frame duration, which is the default in Blender.
It's confusing that moonlight is around 4,000K (normal white). On the Kelvin scale, that's a bit more toward the left, warm-light (tungsten) area, while the typical blueish moonlight look is more toward the right (daylight) area.
mr Andrew fisher price AMAZING TALK! as always as far as i can tell
Thank you for this video. I don't really use Blender, but the concepts all apply to iClone.
almost 1M, congrats
What an amazing concise tutorial! Thank you!
here is the blender guru dropping some fax
out with your notes kids
I really wanted to hear Andrew talk about a new topic like AI, or at least not about something he's already covered on his YouTube channel. Good talk anyway!
Awesome, as a new learner, the info is useful💪
The Donut Guy behind my starting journey into blender ❤❤