I think someday I'll make a game on top of my engine _wink_
So do I. I want to make a vampire game using my VampEngine!
the cherno is nice
Great video
Suic, you could not possibly have seen the full video. How could you know if it's a good video?
Even at 2x playback speed, the video's four minutes old and 18 minutes long. Still, I'm sure it's a great video, but I couldn't definitively state that for at least another 14 minutes.
@@amund8821 I just assumed, I'm still watching it, but all of his previous videos were great, so.
My man's playing the video at 36x speed!
I wish you used more modern, good idioms.
For example, use std::array instead of C-style arrays.
For the camera data, define OrthographicData and PerspectiveData separately and store them as data members in SceneCamera, to improve encapsulation.
These are things that may look hard, but would actually help beginners get familiar with good idioms, instead of learning bad idioms only to have to forget them and re-learn the good ones.
I (like most people, unfortunately) was taught C as my first programming language at university, and the transition to C++ (basically forgetting every C idiom ever) is painful.
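Something like this is what I have in mind (just a rough sketch; OrthographicData/PerspectiveData and the member names are my own suggestion, not from the video):

```cpp
#include <array>

enum class ProjectionType { Perspective = 0, Orthographic = 1 };

struct PerspectiveData  { float VerticalFOV = 0.785f; float NearClip = 0.01f;  float FarClip = 1000.0f; };
struct OrthographicData { float Size = 10.0f;         float NearClip = -1.0f;  float FarClip = 1.0f;    };

class SceneCamera
{
public:
    // Each mode's settings travel together as one struct instead of loose floats.
    void SetPerspective(const PerspectiveData& data)   { m_Perspective = data;  RecalculateProjection(); }
    void SetOrthographic(const OrthographicData& data) { m_Orthographic = data; RecalculateProjection(); }

private:
    void RecalculateProjection() { /* rebuild the projection matrix from whichever struct is active */ }

    ProjectionType   m_ProjectionType = ProjectionType::Orthographic;
    PerspectiveData  m_Perspective;
    OrthographicData m_Orthographic;
};

// And std::array instead of a C-style array for the UI labels:
constexpr std::array<const char*, 2> projectionTypeStrings = { "Perspective", "Orthographic" };
```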
i love the cherno
thanks!
great vid
you could replace your string array with an inline function, something like GetProjectionTypeName(EProjectionType projectionType), that switches on the specified projection type and returns a name
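Roughly like this (the enum and function names are just placeholders for whatever you already have):

```cpp
enum class EProjectionType { Perspective = 0, Orthographic = 1 };

// Returns a display name for the given projection type instead of indexing into a string array.
inline const char* GetProjectionTypeName(EProjectionType projectionType)
{
    switch (projectionType)
    {
        case EProjectionType::Perspective:  return "Perspective";
        case EProjectionType::Orthographic: return "Orthographic";
    }
    return "Unknown";
}
```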
Great series! I hope this comment is taken as constructive. Instead of checking for the types with if conditionals, why not use a C++17 ADT? Then you could use one of the pattern matching libraries and clean up the domain logic. Pattern matching is coming to C++23, might as well jump on the bandwagon now :D
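For example, if by ADT you mean std::variant, a rough sketch with plain std::visit (no pattern matching library, and all names made up) could look like this:

```cpp
#include <variant>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Perspective  { float verticalFOV; float nearClip; float farClip; };
struct Orthographic { float size;        float nearClip; float farClip; };
using Projection = std::variant<Perspective, Orthographic>;

// Standard C++17 "overloaded" helper so std::visit can dispatch on the alternative's type.
template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

glm::mat4 BuildProjection(const Projection& projection, float aspectRatio)
{
    return std::visit(overloaded{
        [&](const Perspective& p) {
            return glm::perspective(p.verticalFOV, aspectRatio, p.nearClip, p.farClip);
        },
        [&](const Orthographic& o) {
            float halfWidth  = o.size * aspectRatio * 0.5f;
            float halfHeight = o.size * 0.5f;
            return glm::ortho(-halfWidth, halfWidth, -halfHeight, halfHeight, o.nearClip, o.farClip);
        }
    }, projection);
}
```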
the cherno
@The Cherno, can you tell us how to make a UI for different screen resolutions, as well as for a resizable app window?
I'll give you a nudge, but an in-depth video on this topic would be a great idea at some point in the series.
With GLFW, you can register a callback for when the window is resized (glfwSetWindowSizeCallback). All you really need to do there is recalculate the view and projection matrices to reflect the new window client-area aspect ratio.
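A minimal sketch of what that callback might do (camera.SetViewportSize is just a placeholder for however you rebuild your projection, not a real Hazel call):

```cpp
#include <GLFW/glfw3.h>

static void OnWindowResize(GLFWwindow* /*window*/, int width, int height)
{
    if (width == 0 || height == 0)   // window was minimized, nothing to recalculate
        return;

    float aspectRatio = (float)width / (float)height;
    // Rebuild the view/projection matrices here with the new aspect ratio,
    // e.g. something like camera.SetViewportSize(width, height).
    (void)aspectRatio;
}

// After creating the window:
//     glfwSetWindowSizeCallback(window, OnWindowResize);
```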
UI layers are a bit more difficult, though ImGui is already set up to deal with it. Without ImGui or a similar external UI library, you'll likely end up programming dozens of widgets capable of determining their min/max/preferred sizes and letting them resize themselves with help from a UI layout layer. It gets tricky, though; just look at any existing widget libraries out there, and many still get the edge cases wrong!
When fullscreen, game UI should be a fraction of the screen, which is what the API actually expects already. There are more details around font rendering and pixel alignment that are a bit too complicated to get into here.
When in a window that resizes, like a regular desktop app, UI should be in pixels and recomputed whenever the view resizes. This is quite hard to implement well, but if you want to give it a try, start with the rough model of a tree of nodes with an interface of measure(), which returns the size the node needs and depends only on its internal state and its children's measure results; layout(), which receives a placement rectangle and should only call measure() or layout() on its children and/or store the rectangle; and render(), which uses the last placement rectangle and current state. You will need to fiddle with this a bit to get things like text wrapping or image aspect ratios to work.
Then all the external system needs to do is call layout() on the root node with the window size on startup and resize to place everything. The initial real work is creating all the layout nodes that dock and stack things, and the leaf nodes that render text, fills, borders, etc., then clipping children for scrolling, then handling input, which has a lot of different options (I'm actually not a fan of dispatching input into the display tree, but that's a different story).
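A bare-bones sketch of that node interface, just to make the shape concrete (all names here are mine):

```cpp
#include <memory>
#include <vector>

struct Size { float width = 0, height = 0; };
struct Rect { float x = 0, y = 0, width = 0, height = 0; };

class UINode
{
public:
    virtual ~UINode() = default;

    // How big does this node want to be? Depends only on internal state and children's measurements.
    virtual Size Measure() = 0;

    // The parent hands us a rectangle; we store it and place our children inside it.
    virtual void Layout(const Rect& placement) { m_Placement = placement; }

    // Draw using the last placement rectangle and current state.
    virtual void Render() = 0;

protected:
    Rect m_Placement;
    std::vector<std::unique_ptr<UINode>> m_Children;
};

// On startup and on every resize, the outside world only needs:
//     root->Layout({ 0.0f, 0.0f, (float)windowWidth, (float)windowHeight });
```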
Can you also tell us what modern approaches are currently available for drawing text? I know about Signed Distance Fields, but I don't think that modern game engines use it for text rendering
@@icosider Most games license a commercial engine for all of this, so I'm not sure what the state of the art is. There aren't too many approaches I've seen, though: pretty sure it's going to be instanced quads from a dynamically generated atlas at the exact font size for anything view-aligned. Possibly heavy use of render-to-texture to get the fancy effects? From my understanding SDF is more for in-world or animated text, where you can't pre-render at a specific size, though it can also implement effects.
@@SimonBuchanNz I can use Hiero from libGDX to pre-render glyphs at a specific size🤔
can you elaborate on why you use the C style cast `(int)type` instead of `static_cast<int>(type)`
and why you use const char* instead of std::string? I guess the raw pointer is due to ImGui?
In C++, using a C style cast ( "(int)type" ) is syntactic sugar that "attempts" several kinds of C++ casts until one works. For a conversion like this one it resolves to a static_cast, so if you're certain that cast will work no matter what, it's essentially the same.
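For example, for this enum-to-int conversion both spellings end up doing the same thing:

```cpp
enum class ProjectionType { Perspective = 0, Orthographic = 1 };

void Example()
{
    ProjectionType type = ProjectionType::Orthographic;
    int a = (int)type;               // C style cast: resolves to a static_cast here
    int b = static_cast<int>(type);  // spells out exactly which conversion is intended
    // a == b == 1, and the generated code is identical
}
```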
@@Marcus615 Alright, I just thought using the explicit cast was considered good practice
@@CoinedBeatz Some people just think static_cast is ugly.
@@CoinedBeatz It's good formatting practice because it shows readers you *know* something will pass the static_cast, but performance-wise they are no different
The Cherno, have you implemented Box2D in Hazel? Because with the camera entity, scene hierarchy, and properties panel built into the engine, if you put Box2D in it I would really love to see you make the farty rocket clone with this new version of Hazel.
It would be about a year since the farty rocket video's release. Would be cool to see a remake with all those new changes.
Do you still have the new office? I know current situations mean you might not be able to go to it but just wondered.
Why uppercase the letter after the underscore?
I want to ask something: how can you compile the Hazel public or Hazel dev code (I think they're just the same except for some things) and make it into an application, so that I don't have to run the code every time?
there should be an executable in the bin folder somewhere; you may need to copy the shader files to get it to run, though
@@FlukierJupiter thank you
Hey Cherno! First time I've left a comment on a vid, and I have a question for you: I'm a Unity game dev by trade and I want to get good at C++ and game engine creation. I'm currently reading a book called "Game Engine Architecture" by Jason Gregory. Would you recommend any other books for me? Or perhaps source code I should look at? Thanks!
Bump
Handmade Hero is awesome. Follow along with the first 30 or so days and you'll learn a ton!
@@williamstech9314 "Handmade Hero" ? I'll look that up thanks =)
@@williamstech9314 i also recommend
Can you one day do a typing test and do like a lil announcement of your typing speed?
Thanks for your great work! But I have come across an issue where two textured entities will not render properly when tweaking the camera's z position in perspective mode... Any ideas?
What color theme do you use?
How do you set up your taskbar to look like that?
Why can't I watch the recorded livestreams? Is it a common issue?
the cherno is cool
the cherno is helpful
the chern
Which color scheme are you using?
Doesn't really make any sense to me why you'd have a Camera Component. I mean, a camera is a rendering thing, you do occlusion and everything based on the camera; it's not really something you would compose onto an Entity. What does it even mean for an Entity to have a Camera Component? How does that affect the culling pass, or cases where you have multiple viewports or things like portals?
Think about it in a different way: in a game you can have a few cameras to make dialogue more cinematic and just switch between them.
@@DrockGaming That still doesn't make sense as to why it's a component.
Depends on what your definition of an entity is... Typically, cameras also need a transform (to compute the view matrix), and this transform may be either global or local (and further depend on scene hierarchy information). If transforms and hierarchy nodes already exist as components in your engine, then a camera can be regarded as the aggregation of, say, a transform, a hierarchy node, and projection information. So a camera is *itself* a perfectly valid entity. However, Cherno chose to call the projection information component a "camera component", and I think this is what's throwing you off here.
This approach pays off if, later on, you need to customize the camera behavior even more. To make a shaky camera, for example, you create a shaky component and devise a system to update the camera's transform based on this component. Then making a camera shaky is easy: you don't need to switch implementations or whatever, you just add a new component to the camera entity.
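To make that concrete, here's a rough sketch with EnTT: a camera is just an entity that reuses the transform and hierarchy components and adds projection data on top (the component names are placeholders, not Hazel's actual types).

```cpp
#include <entt/entt.hpp>
#include <glm/glm.hpp>

struct TransformComponent  { glm::mat4 Transform{ 1.0f }; };
struct HierarchyComponent  { entt::entity Parent = entt::null; };
struct ProjectionComponent { glm::mat4 Projection{ 1.0f }; bool Primary = true; };

entt::entity CreateCameraEntity(entt::registry& registry)
{
    entt::entity camera = registry.create();
    registry.emplace<TransformComponent>(camera);   // reused: same transform as any other entity
    registry.emplace<HierarchyComponent>(camera);   // reused: cameras can be parented like anything else
    registry.emplace<ProjectionComponent>(camera);  // the only genuinely camera-specific part
    return camera;
}
```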
@@tzimmermann that just sounds like a waste, though. When will you ever need to add a Shaky component to something that isn't a camera? Now you're iterating through all cameras with a Shaky component every frame, or your Shaky system will have to know about the Camera component (is this camera active, is its viewport visible to the player) so you can optimise out the logic if it's not visible; and potentially you have multiple cameras but they're not all shaky at the same time, so you're wasting a step iterating over things that don't need to shake. Whereas if you put the shaky logic on the Camera itself, the specific Camera will shake when it needs to. It's also an extra stage of iterating through component systems, for something that is isolated to a very unique entity type.
Think about it: when will you need to add a UV Component, a Shader Component, or a Texture Component separately? These things are all just part of Materials; you never need to add them as individual components because they won't do anything on their own. Just have a single Material, and even then, your Material is just part of a Model; you will never need to have a Material Component by itself.
@@Terszel Honestly that's just an example I made up without giving it much thought (I don't have shaky cams in my engine at the moment). That's what I would think of at first, as I tend to implement behaviors as single-responsibility systems. I use EnTT too for the ECS. Remember your systems can iterate over views (or groups) and not just single component types, so in your ShakySystem you can trivially iterate over cameras that have a ShakyComponent AND a TransformComponent AND a ProjectionComponent, and this view is compile-time... And EnTT is designed to make this kind of traversal very fast. And the logic is optimized out if your view is empty (no shaky cam). It avoids branching too much during the update. That's basically existence based processing.
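As a sketch of what that ShakySystem view could look like (ShakyComponent is hypothetical, and this reuses the component types from my sketch above):

```cpp
#include <cmath>

struct ShakyComponent { float Amplitude = 0.1f; float Frequency = 25.0f; };

void UpdateShakySystem(entt::registry& registry, float time)
{
    // The view only visits entities that have ALL three components;
    // if no camera is currently shaky, the loop body simply never runs.
    auto view = registry.view<ShakyComponent, TransformComponent, ProjectionComponent>();
    for (entt::entity entity : view)
    {
        auto& shake     = view.get<ShakyComponent>(entity);
        auto& transform = view.get<TransformComponent>(entity);
        // Nudge the camera transform with some simple oscillation.
        transform.Transform[3].y += shake.Amplitude * std::sin(shake.Frequency * time);
    }
}
```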
Then, I don't really like talking about performance if I haven't measured it. I doubt such systems could bottleneck the CPU, but if some day, my profiling data tell me otherwise, I should act accordingly.
Also, we're talking about developing an engine, not a game. Some games don't even need a shaky cam, so it might be a valid decision not to provide the shaky behavior engine-side. If you have an ECS, you (hopefully) already provide a way for your user to pass custom data / functionality through an API. If you don't implement cameras as entities, then I assume you'd need yet another API for the user to be able to set up a shaky cam...
That being said, I think my point remains valid as to whether you should reuse already implemented components (like the transform component) as part of your camera system. More than half of what you need to implement a camera should already exist in the form of a component, so why would you do the work twice?
i love the cherno
the cherno
i love the cherno
i love the cherno
the cherno
the cherno
the cherno