ah, remember when PC and Mac were both x86_64? good times.
And both were good enough when using OpenGL
Good talk, I always enjoy the "game engine" talks :)
Haha, he looks slightly annoyed at the people (same person?) not really asking questions, but wanting to show off their own knowledge of C++.
The whole time I was watching it I was like "This guy needs to shut the f*** up...".
Yeah, the constant interruptions were fairly distracting. Does anyone know if this was billed as more of an interactive presentation, or was this person just being rude and interrupting all the time? I feel like unless the presenter asks if there are any questions, you should hold questions until the end, especially if they aren't really relevant to the topic being discussed at that moment. (I think you could figure out what 'push' does, and regardless, it's not relevant to what he was talking about with C++11 in games.)
I encountered this kind of person once. I learned much later that he has Asperger syndrome.
I don't mean the guy in the video, though.
I was in that class! I would say about 5% of the time he raised his hand he had an actual question, and the other 95% of the time he just wanted to show how smart he was.
He should be blacklisted :P
I also use C++11 (and boost) heavily in game development - love it! Good talk.
55:30 One can force compile-time execution of constexpr functions by wrapping them in an integral_constant. For example, using
#define FORCE_COMPILETIME(e) (std::integral_constant<decltype(e), (e)>::value)
That's what I am using to get constexpr string hashes into switch statements.
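For anyone curious how that plays out with string hashes in a switch, here is a small self-contained sketch of the idea; the hash function fnv1a and the helper describe() are invented for illustration and are not from the comment above or the talk.

```cpp
#include <cstdint>
#include <type_traits>

// Illustrative constexpr FNV-1a hash (recursive form, valid in C++11).
constexpr std::uint64_t fnv1a(const char* s, std::uint64_t h = 14695981039346656037ull)
{
    return *s ? fnv1a(s + 1, (h ^ static_cast<unsigned char>(*s)) * 1099511628211ull) : h;
}

// Wrapping the expression in integral_constant forces it to be a constant expression,
// so the hash is guaranteed to be computed at compile time.
#define FORCE_COMPILETIME(e) (std::integral_constant<decltype(e), (e)>::value)

const char* describe(const char* name)
{
    switch (fnv1a(name))                        // runtime hash of the incoming string
    {
    case FORCE_COMPILETIME(fnv1a("spawn")):     // case labels must be constant expressions;
        return "spawn point";                   // the macro makes that guarantee explicit
    case FORCE_COMPILETIME(fnv1a("enemy")):
        return "enemy entity";
    default:
        return "unknown";
    }
}
```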
this guy really likes to leverage
48:40 "resource sterilization"... so these pesky resources don't start to multiply and things get out of hand :D
@19:36 *sigh* Yet-Another-Reflection-System implemented via macros because the language doesn't offer it ... yet.
Custom pre-processing step.
Some people see this as even worse than using macros, even if it is as good and simple as Qt's.
Michael Pohoreski I too am not the biggest fan of the macro method. It can be hard to debug, and traversing layers of macros is not fun. Preprocessing, as you mentioned, could be a more debuggable way, but for this project the overhead of creating the system to do all that generation was prohibitive given the time.
This method also got me the compile-time versioning I wanted, in a way that exactly matches what the compiler created... preprocessing could leave you with cases where the data is not always updated (though they could be rare).
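For readers wondering what a macro-based reflection registration tends to look like, here is a minimal, hypothetical sketch of the general pattern (field names plus byte offsets collected by macros). It is not the system from the talk; REFLECT_BEGIN/REFLECT_FIELD/REFLECT_END and FieldInfo are invented names.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical minimal macro-based reflection: each REFLECT_FIELD records the
// member's name and offset. Illustrative only, not the talk's system.
struct FieldInfo
{
    const char* name;
    std::size_t offset;
};

#define REFLECT_BEGIN(Type)                                     \
    static const std::vector<FieldInfo>& reflectedFields()      \
    {                                                           \
        typedef Type ReflectedType;                             \
        static const std::vector<FieldInfo> fields = {
#define REFLECT_FIELD(member)                                   \
            { #member, offsetof(ReflectedType, member) },
#define REFLECT_END()                                           \
        };                                                      \
        return fields;                                          \
    }

struct SpawnPoint
{
    float x, y, z;

    REFLECT_BEGIN(SpawnPoint)
        REFLECT_FIELD(x)
        REFLECT_FIELD(y)
        REFLECT_FIELD(z)
    REFLECT_END()
};
```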
Compile-time asset processing is the exact opposite of the kind of change I want to see in games development. We should be making it easier for end users to make modifications, which both increases the target audience and the expected lifetime of any game, and can provide even more benefits, including free community patches and the like. Instead we design mods out entirely, rather than providing better error messages to our artists, just to squeeze another few moments out of our load time?
Implement compile-time processing for base game assets and dynamic loading for mods; the only negative is that you have to maintain both.
@jan-lukas ... so just make everything dynamic from the start and have no extra issues to deal with, then.
I disagree; doing this at compile time allows things to be considerably more performant.
@FireflyInTheDusk and then pay a performance cost at runtime... no thanks.
5:10 Chandler’s talk this morning:
ruclips.net/video/vElZc6zSIXM/видео.html
With this task system, how are tasks interrupted?
Anyone know which of Chandler's talks he was referencing?
Isn't make_unique a C++14 feature?
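For reference: std::make_unique was indeed only added in C++14. A minimal C++11 stand-in for the non-array case (a common workaround, not the exact standard-library implementation) is short:

```cpp
#include <memory>
#include <utility>

// Minimal C++11 substitute for the non-array std::make_unique (added in C++14).
template <typename T, typename... Args>
std::unique_ptr<T> make_unique(Args&&... args)
{
    return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}
```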
He's saying he's using a unique_ptr to have automatic cleanup if there's a crash, but he also says he's not catching exceptions... so if there's a crash, the whole process is killed anyway and the unique_ptr doesn't matter at all, does it? It only helps him not to worry about it if the user quits the game normally... Or am I missing something?
The unique_ptr buys the ability to have entire game states go out of scope and not leak memory. For a task queue, it would allow you to reset the queue without having to delete each remaining task.
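A minimal sketch of what that buys you, using invented Task/TaskQueue types (not the talk's actual classes): clearing a container of unique_ptrs destroys every remaining task automatically.

```cpp
#include <memory>
#include <vector>

// Illustrative task interface; not the talk's real type.
struct Task
{
    virtual ~Task() = default;
    virtual void run() = 0;
};

class TaskQueue
{
public:
    void push(std::unique_ptr<Task> task) { m_tasks.push_back(std::move(task)); }

    // Resetting the queue (e.g. when a game state goes out of scope) destroys
    // every outstanding task; no manual delete loop is needed.
    void reset() { m_tasks.clear(); }

private:
    std::vector<std::unique_ptr<Task>> m_tasks;
};
```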
Jason Jurecka I know, but he specifically said it helps in the case of crashes...
MrWorshipMe I believe that was misspoken, as the example he gave right after was resetting the game state back to the main menu.
Why are you talking in the third person about yourself, or are you not THE Jason Jurecka in the video?
Can someone help me find the video he's talking about around 5:40?
I think it's this one ruclips.net/video/vElZc6zSIXM/видео.html
What does it mean that a task is "not completed" and it is placed back into the queue?
I guess you could run tasks every frame by placing them in the queue but never completing them. Also, networking error handling could "not complete" the task so it retries on the next iteration. See the sketch below.
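A hedged sketch of that idea, assuming a task type that reports whether it finished; the bool-returning Task alias and pumpTasks are invented names, not from the talk.

```cpp
#include <cstddef>
#include <deque>
#include <functional>
#include <utility>

using Task = std::function<bool()>;   // returns true when the task is done

void pumpTasks(std::deque<Task>& queue)
{
    std::size_t count = queue.size();          // only touch tasks queued this frame
    while (count--)
    {
        Task task = std::move(queue.front());
        queue.pop_front();
        if (!task())                           // not completed, e.g. a pending network retry
            queue.push_back(std::move(task));  // re-queue for the next iteration
    }
}
```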
What theme is that for the code on the slides?
"We can patch to fix found things in the wild" Now i understand why Blizzard games are bugged
Actually, this has pretty much been industry standard since the early 2000s. Proof? Oblivion, Fallout 3, Skyrim, and Fallout 4 all have an instance of the same physics bug (an entity will randomly fly into orbit, then fall back down to the ground for seemingly no reason; another is that an entity will sometimes "string out", stretch to ridiculous proportions, and roll out of the scene... this one happens a lot less than the Entity Space Program issue), DESPITE using newer versions of, or entirely different, physics engines. It's not game-breaking, it's not throwing a major exception, and they have "turned off" all the minor exception handling.
They may not even KNOW this is going on without us telling them it's going on.
He said that as a negative point, not as something they did. Listen again.
Anyone know what color scheme is being used for the code in the slides?
I would love to know that too
I believe it is just the Dark theme in Visual Studio. This is what I use and I go in and adjust a couple colors.
Visual Studio 2019 dark theme.
Using a struct for only a spawn point is a little bit odd, especially for a large C++ project. Maybe it would make sense if the scope was small, like a private C# struct, but polluting your C++ namespace isn't a good idea in my opinion.
@28:00, his code is really bad.
When there aren't tasks to execute, his code will wake up each thread in the pool 333 times per second. This prevents CPU sleep states, meaning his code is going to burn battery on mobiles and keep the CPU hot on all platforms.
And when a task is posted to the pool, his code adds up to 3ms of extra latency for absolutely no reason. The right approach is to use a better queue that can block consumer threads on a synchronization primitive, e.g. std::condition_variable, until more tasks are available. Or on Windows, there's a built-in queue, the I/O completion port.
Here's a basic example of such a queue (not mine, untested): github.com/juanchopanza/cppblog/blob/master/Concurrency/Queue/Queue.h
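For completeness, a minimal condition_variable-based blocking queue along the same lines; this is illustrative only (neither the talk's code nor the linked repo), and the BlockingTaskQueue/Task names are invented.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

// Minimal blocking task queue: idle workers sleep on the condition variable
// instead of polling, so they consume no CPU while the queue is empty.
class BlockingTaskQueue
{
public:
    using Task = std::function<void()>;

    void push(Task task)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_tasks.push(std::move(task));
        }
        m_condition.notify_one();   // wake exactly one waiting worker
    }

    // Blocks until a task is available, then hands it to the caller.
    Task pop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_condition.wait(lock, [this] { return !m_tasks.empty(); });
        Task task = std::move(m_tasks.front());
        m_tasks.pop();
        return task;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_condition;
    std::queue<Task> m_tasks;
};
```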
It is a game, there are always tasks to execute; the CPU usually does not sleep when you play a game :)
+catbertsis exactly what I was thinking. We're talking about a program that will always be busy anyway.
Konstantin X you are correct that the thread sleep is not the best idea. In a quick iteration, the example now uses a condition_variable to wait for tasks as needed, but as mentioned, games usually have tasks running just about all the time.
On a fast enough desktop PC, a well-designed game won't have tasks running all the time. We don't want to compute physics or AI more often than once per frame. FPS is typically limited to 60Hz, unless the user knows what she is doing and switches off VSync.
Even when that's not the case, and the game indeed has tasks running all the time, that doesn't mean each CPU core has tasks running all the time as well. In most games I worked on, some form of thread synchronization was still used to merge data from background threads into the rendering thread.
15:50 BOOM!
C++ became an unreadable monster.
52:50 stick with =0 for pure, and use the word pure for 'immutable by default'
Well, multithreading won't "solve things all at once", because the processor is serial. Maybe if you could put a thread on each core, things could be different.
What do you mean? There are context switches, since there are more threads/processes than logical cores, but some threads really do run concurrently some of the time...
MrAbrazildo the example uses a thread pool with one thread per core, so as each core executes, multiple things can be worked on at once. The context switching that is referenced is about the caching of data for each core: if one core is processing a POD item, it is loaded into cache and then processed; if the next POD item cannot be prefetched by the processor, the processor has to start over and pull the needed POD item into its cache. Performance can be gained by keeping like data together in contiguous memory and processing it on the same core, so the caching and prefetch work in your favour. This example is just generic and does not try to be smart about task organization; the talk mentions this in reply to one of the questions. A generic sketch of the idea follows below.
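A generic illustration of the "one thread per core, contiguous POD data" point; the Particle struct and integrate() function are invented for this sketch and are not the talk's code.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical POD item kept in contiguous memory so cache lines and the
// hardware prefetcher work in the worker's favour.
struct Particle { float x, y, z, vx, vy, vz; };

void integrate(std::vector<Particle>& particles, float dt)
{
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;                 // hardware_concurrency() may return 0

    const std::size_t chunk = (particles.size() + cores - 1) / cores;
    std::vector<std::thread> workers;

    // One worker per core, each walking its own contiguous chunk of the array.
    for (unsigned c = 0; c < cores; ++c)
    {
        const std::size_t begin = c * chunk;
        const std::size_t end   = std::min(begin + chunk, particles.size());
        if (begin >= end) break;

        workers.emplace_back([&particles, begin, end, dt] {
            for (std::size_t i = begin; i < end; ++i)   // cache-friendly linear pass
            {
                particles[i].x += particles[i].vx * dt;
                particles[i].y += particles[i].vy * dt;
                particles[i].z += particles[i].vz * dt;
            }
        });
    }

    for (auto& w : workers)
        w.join();
}
```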
Jason Jurecka So, in practice, does multithreading improve the chance of a performance boost? I don't see anybody talking about this.
***** I said multithreading, not multi-core.
***** Then why is multithreading never mentioned in videos about performance? Not even in that "C++ in Huge AAA Games" talk, which showed several smart tricks. Nor does Carruth's LLVM compiler seem to create multiple threads.
The year is 2016. Tired of his company using pre-98 C++, an ambitious game dev decides to create a game engine using C++14 and the latest and greatest to avoid the mistakes of his employer. Wait, what? He used C++11 instead of C++14? Is that yet another macro-based reflection system (19:00)? He doesn't want to use Vulkan/DX12 because it's "too new" (34:00)? He thinks it's fine to assume we can safely use the default destructor based solely on a forward declaration (56:50)? Whaaaa? Why does this feel like one of those "I made a Minecraft clone in one day" videos?
You missed the part about his stated goals. You answered your own question about Vulkan if you look at when it was released and its device support.
The one thing I hate about game devs is what he said about game quality: "if 1% of 1% has issues, sorry". I get it, but try to strive for 100%. Why settle for less?
Because you can't "waste" a year of game development fixing bugs that only occur under specific circumstances. If the bug isn't game-breaking, or only a very small percentage of people have the problem, it's simply not a priority. It's not that they don't try to fix bugs whenever they can, but if it's not a priority, they can't focus on it.
I think what he meant is that you don't actively search for those issues, even though you should obviously fix them if they're reported to the bug tracker (which, unfortunately, not all games have). This means you fix them once they're reported, but don't waste time hunting for them before the game is released.