Very nice video. Of course there's always more to test :) A few extras for next video.
Unreal's version of ECS is called the MassEntity system.
You can compile down to C++ in Unity by enabling the "IL2CPP" option in build settings.
You can also compile Unreal Blueprints to C++.
Wow very nice, I didn't know those things! Thanks
@@MaxMakesGames it would be cool if you did this video again but comparing Mass Entity against DOTS
It's worth noting that Unreal Blueprints can no longer be compiled to C++ in UE5.
Also IL2CPP isn't faster than Mono in most cases.
A year ago I was deciding whether to go with Unreal or Unity, and I saw that video of yours too. In the end I chose Unreal just because I had a good experience with C++, and I think it was a good choice after all.
Glad to hear that! Unreal is definitely a good choice. It can be a bit overwhelming at first though haha
I chose Unity because of C# haha
I chose Unity because of my low end PC 😁 @@stylie473joker5
It appears to me that you're doing DOTS very, very wrong. Have you used Jobs? And Burst? Because that's why Unity built these things, the HPC# stack.
I did use Burst whenever I could. For the spawner I couldn't, but for the cubes looping over the other cubes I just used an Entities.ForEach with ScheduleParallel, so I didn't need to get into Jobs. If it can be improved then that'd be good, but it won that test anyway haha
The tests that DOTS failed in, like physics and graphics, I had nothing to do there except spawn things, and for physics change the velocity every 2 sec (almost no cost), so it's 99% the physics/graphics package. The only test where my code had an impact, DOTS won, so even if I messed up it doesn't matter much.
@@MaxMakesGames
Fair enough. I think in general the use of Jobs would be more advisable, considering that as far as I remember Burst is only able to optimize Jobs (you need to use an attribute for that; I didn't see it in the video, but I admit I haven't used Unity in months, so maybe some things have changed).
Well, anyway, that cleared up, good job with the video.
@@diadetediotedio6918 Yea it has changed a lot. Now you can use [BurstCompatible] or something like that above the class and it will use burst in the entities for each unless you use .WithoutBurst(). It's a pretty cool system :)
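For readers following along, the pattern being discussed (a parallel-scheduled Entities.ForEach that Burst compiles unless you opt out) looks roughly like this sketch. This uses the Entities 0.x-era `SystemBase` API; component and method names vary between DOTS versions, so treat everything here as illustrative rather than exact.

```csharp
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms;

// A minimal per-entity update system, sketched from the thread above.
public partial class MoveCubesSystem : SystemBase
{
    protected override void OnUpdate()
    {
        float dt = Time.DeltaTime;
        Entities
            .ForEach((ref Translation t) =>
            {
                t.Value += new float3(0f, dt, 0f); // trivial per-entity work
            })
            .ScheduleParallel(); // Burst-compiled by default; .WithoutBurst() opts out
    }
}
```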
Hi! You've done very cool research! Can you run the same physics test but with "Tick Physics Async" enabled in the UE5 project settings? It should improve the performance.
Rendering quality is also a point, but the winner is clear there I guess
Good video
I'm assuming you used the default 2022 renderer, not HDRP or URP. I can't seem to find your hardware specs. Thanks for the comparison!
I don't think that CPU usage is a good indicator. For games, low CPU usage is only useful for longer battery life on mobile devices.
Unity's newest physics simulation is multithreaded, so high CPU utilization can be good for performance. It would be like comparing RAM usage. High RAM usage can also be a good thing.
Ideally you would compare performance without any rendering with identical testing scenarios.
The framerate is locked in this test. In this case, the more efficient engine will have lower CPU utilisation even if multithreading is involved.
I would like to see a comparison between UE5.1 Aparatus ECS+ Plugin and Unity DOTS
Yes that'd be interesting ! I think it would be better to wait for DOTS to support animations and fix collisions first tho
@@MaxMakesGames it looks like Unity is working on animations for DOTS, but there's no release date stated
Nanite skeletal meshes are allowed in UE 5.5 now.
@@SaschaRobitzki :O that's huge
Nice work. Definitely a few improvements that could be made to isolate what exactly is causing the issues, but at that point you'd really have to get into the territory of hyper-optimizing for engine specific features. In general I'm going to assume the much higher quality rendering and shaders in Unreal probably contribute more than anything to the lower framerates in most of those tests. There's also the consideration of things like instancing, batching, and render paths. A real hardware-limit-pusher would be to render many different meshes with many different materials, maybe even throw a variety of moving realtime lights in there just to see. It would certainly go a lot further to simulate a realistic scenario that would likely happen in-game.

Also a side note, my personal experience with Unity when using animated meshes is that the Animator will bring most machines to their knees after only a few hundred instances in view. I'm not sure if you used the Animator or just a basic animation clip using the legacy system, but it can make a huge difference in some cases. Granted, with those kinds of numbers any engine would be better off using some kind of LoD and switching to fixed-frame texture-baked animations that are done entirely on the GPU.

Overall, this is still a valuable starting point for consideration. Thanks for taking the time to measure and document all of this stuff!!
I have switched over from Unity C# to Unreal C++. And although I do appreciate the work you have put into this, I will have to argue with most of the results besides the Blueprint system. But on the Blueprint system, the only fair comparison would be to use Unity's visual scripting tools against it.
Unreal Engine is A BEAST with a very steep learning curve. Out of the box it comes very clunky, performance heavy, and a real slug when it comes to workflow. BUT!
C++ is a lower level programming language and that gives you more performance and control. If you can't handle a fast car properly, you will hit a wall. From my experience working on a C++ project (it's not the same as a Blueprints-based project with C++ code in it), things that would make me seriously question whether they're viable in Unity, Unreal gets done without a hitch. I think it's more than twice as performant, and I'd even say it's safe to say more than twice.
Physics - I can easily spawn like 5-10k balls (more performance heavy than a box due to the collider) and drop them into a shaking giant box, and it will handle it at like 30fps IN EDITOR with high scaling.
I think what happened here is the script you wrote was badly optimized and simply overloaded it.
Graphics - no comment here.
Unity DOTS - Check Unreal's ECS and then test it. I'm not sure if it works better than Unity DOTS, but again, it's not fair to compare a feature that requires extra steps to set up against the defaults of Unreal, even though it has its own version of it.
I like Unity, but Unreal Destroys Unity in all of these categories.
Downsides of Unreal - Much harder to learn, C++ requires a lot of extra work as it's lower level, and less learning material.
I am looking to make a marching cubes voxel mesh generator. Based on this, it seems like Unity is the better choice? Not sure if I understood correctly or not
actually, maybe UE5 is better because of Nanite?
Both could do it. If you want to push it far you'll probably want to use compute shaders anyway, which both engines will be equal with (it runs on the GPU so it doesn't change with the engine). Nanite could help if you want to render meshes with a ton of voxels, so I'd go with UE5 if I were you :)
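For anyone wondering what the compute-shader route looks like in Unity, here's a rough sketch of driving a marching-cubes kernel from C#. The `.compute` asset, kernel name, buffer names, and sizes are all made up for illustration; they're not from the video.

```csharp
using UnityEngine;

// Hypothetical driver for a marching-cubes compute shader.
public class VoxelMesher : MonoBehaviour
{
    public ComputeShader marchingCubes;   // hypothetical "MarchingCubes.compute" asset
    ComputeBuffer triangleBuffer;
    const int N = 32;                     // voxels per axis

    void Start()
    {
        // Worst case: 5 triangles per cell, 3 float3 vertices per triangle.
        triangleBuffer = new ComputeBuffer(N * N * N * 5, sizeof(float) * 9,
                                           ComputeBufferType.Append);
        int kernel = marchingCubes.FindKernel("March");
        marchingCubes.SetBuffer(kernel, "_Triangles", triangleBuffer);
        marchingCubes.SetInt("_Resolution", N);
        triangleBuffer.SetCounterValue(0);
        // One thread per cell, assuming [numthreads(8,8,8)] in the shader.
        marchingCubes.Dispatch(kernel, N / 8, N / 8, N / 8);
        // ...then read back the triangle count and build a Mesh from the buffer.
    }

    void OnDestroy() => triangleBuffer?.Release();
}
```

The heavy per-voxel work happens on the GPU either way, which is why engine choice matters less here than for the CPU-side tests.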
FPS does affect CPU load.
GeForce Experience sucks; you should use RivaTuner Statistics Server (comes with MSI Afterburner). You can see all the data you need: GPU usage, temp, VRAM usage; CPU core usage, temp; RAM usage; FPS, frametimes (FPS time-graph).
These tests were not equal situations, so the results are basically worthless. Ideally, if you want to compare physics performance (which is CPU based), you do physics and nothing else: no shaders, no shadows, low resolution, etc. *And the result is not only how many, but how precise.*
Then you will want to compare GPU performance so you do cubemaps, shading, shadows, post-processing, etc.
Then logic, NPC path, NPC nodes (their brains and behavior trees).
3D Animation performance, 2D/3D GUI performance (with and without limited refresh rate).
Then at the end, everything at the same time (not necessarily a stress test, just a real life scenario).
Don't worry if you don't have sufficient time (or technical knowledge) to do it, but if you do it, make sure to say these tests should be taken with a grain of salt.
Use each engine's features to their advantages. If the end result is the same on both, then it's valid.
But at the end, yes, each engine has its ideal application. No engine is "the best engine"; use the engine that fits your needs.
I'm sorry, but I can't seem to find your specs in the description. Am I dumb, and if yes could you help me?
Haha apparently I forgot, that's embarrassing... I added them, thanks. Here they are:
CPU: AMD Ryzen 5 2600 Six-Core 3.40GHz
GPU: MSI Gaming GeForce RTX 3060 12GB 15 Gbps GDDR6, 3 fans
RAM: 16 GB
OS: Windows 10
Could you please do a better physics benchmark test? This one is pretty flawed to be honest... For instance, you are instantiating objects every frame, so it's not a pure physics benchmark. A better approach would be to set the amount you want to instantiate at the start and then spawn them all at once in a single frame. Then you could use spheres instead of cubes and just let them all fall and roll around. That way there is also no script attached that commands a rigidbody to change its velocity every frame. If you want to test physics, then let physics update the rigidbody velocity internally, by just dropping the object and not having any scripts interact with it.
With that approach you can easily test pure physics in both engines a lot better I think.
Finally, you could use the Unity Profiler, which is 1000x more accurate than the Debug Stats window in Unity's Game window. Actually, the one you get in the free version that displays FPS and thread latency has been completely broken for years now, and people know it's not giving accurate info, so you buy the Unity Pro version and never look back. But I don't have the money for that as an indie hobby dev, so it's fine, and I don't expect you to get it for a benchmark video, but just letting you know the free Unity Debug Stats are broken...
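The "spawn everything up front, then hands off" benchmark being proposed here could look something like this in Unity. The prefab, counts, and radius are illustrative assumptions, not anything from the video.

```csharp
using UnityEngine;

// Pure-physics benchmark sketch: drop N spheres from random points inside
// a spherical volume, then let the physics engine run with no per-frame code.
public class PhysicsBench : MonoBehaviour
{
    public GameObject spherePrefab; // a prefab with a SphereCollider + Rigidbody
    public int count = 10000;
    public float radius = 30f;

    void Start()
    {
        for (int i = 0; i < count; i++)
        {
            // Random.insideUnitSphere spreads spawns through the volume,
            // so they don't all hit the ground on the same frame.
            Vector3 pos = transform.position + Random.insideUnitSphere * radius;
            Instantiate(spherePrefab, pos, Quaternion.identity);
        }
        // No Update(): from here on, only the physics engine does work.
    }
}
```

An equivalent Unreal setup would spawn the same count of physics-enabled actors in `BeginPlay` and likewise avoid any per-tick logic.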
Spawning objects and moving them might not have been perfect for testing physics, you're right, but if I just spawned a bunch and dropped them, they would only have hit each other on impact, so it would have just been a lag spike, and that's kind of hard to measure. At least spawning them and making them fly towards each other guarantees continuous collisions and a more "stable" test with no lag spikes.
And I actually didn't use any profiler or debug stats because they can lag the game while they log data, and you can't guarantee the Unity and UE5 ones measure the same way. In fact, running the game in the editor can cause lag compared to a build, so like I said, I exported everything to an exe file before testing. All the tests were done running in built/packaged mode in fullscreen, like they would when you publish your game, to avoid editor or profiler lag, and I used Nvidia GeForce stats directly to get FPS, CPU usage, etc. from the same source every time. The editors and profilers had no effect on the result; they were all raw exes measured with Nvidia.
@@MaxMakesGames Excellent that you used build versions. That makes it more accurate. However, physics collisions also happen with the ground, so you can drop the spheres on a large flat plane and let them collide and roll around on that. Also, you can spawn 10000 spheres in a huge sphere shape; that way they won't all collide with the ground at the same time, so fewer lag spikes presumably.
Anyway do you think you will do another benchmark of purer physics or nah?
@@user-ez7ls2du9c I doubt the results would be any different. Also, colliding spheres and cubes isn't what you'll do in a real game, so I doubt it means anything. I should've tried colliding meshes instead haha
I mean you can try it yourself if you want. From my experience with both engines, collisions are not a big part of performance unless it's the main point of your game. A few objects colliding is not going to do anything in any of the games.
I'm not gonna do another vid, but if you really want, I can test it out for you and give you the results.
When it came to using loops in Blueprints at 8:10, I already knew Blud was about to go down 🥹
More conclusion pls? What does this mean in reality
It means they both have strengths and weaknesses, but they are both strong enough to do most things. If you want to make games with high detail graphics, UE5 is better. If you want to make games with a lot of objects, Unity (especially with DOTS) is better. But they are very close in a lot of cases, so don't worry too much.
Make a Kerbal Space Program style physics simulation with great graphics to determine which one KSP 2 should have been built with
Disable GI in Unreal... else the GI calculation bogs down the test
Like I said, I wanted to test default settings because trying to match settings would be impossible. How would I turn off GI in Unity? Would you realistically make a UE5 game without GI? Probably not. I said multiple times that UE's lighting is better, so even if the performance is a bit worse, it might still be worth it because of how good it looks.
@@MaxMakesGames usually people use baked lighting... GI is a heavy hitter in Unreal and usually not recommended... even in places like Fortnite where Epic used GI, they did severe optimizations...
You can switch lighting from dynamic to static... Having said that, I agree... Unreal is less interested in gaming and more interested in replacing tools in Hollywood... and their recent efforts are testament to this fact...
As already pointed out in another comment a more fair comparison would've been to use the MassEntity system in Unreal. But this is an interesting video nonetheless!
Yea I guess I was missing that. I didn't even know it existed :(
@@MaxMakesGames Yeah it doesn't get talked about much. Still in its early stages definitely
@@MaxMakesGames If you end up doing that make sure to use Burst with Unity too ;) I saw you were using DOTS without Burst, that can make a difference anywhere from 2x to 10x performance
Unity DOTS does not improve performance by itself; it improves QOL for developers making multithreaded games. Did you use Burst with DOTS? Without Burst and multithreaded code, DOTS can't unlock all the performance it offers.
UE has the same problem. You can't compare UE and Unity if you don't use multithreaded code, because on a single core it depends on whether your CPU is Intel or AMD, and that skews the bench results.
Thanks for the video!
I did use Burst when it was possible (it is not always), and UE handles the threads itself. You can see in the CPU tests that my CPU went up to 100% or close to it, so I was clearly not on a single core.

Even if I didn't use multithreading, single thread performance is really important for game dev because most things will run on the main thread. Single thread performance might be even more important than all-thread performance. If all the updates of actors/objects run in a row on 1 thread, you want that to be fast, not to somehow split the updates into 2 threads, which could end up messing everything up. Multithreading is for different tasks.

In most of these tests, I did 1 task to the extreme, like looping over all other cubes with each cube. How would that be done in multithread? Each cube has its own thread? In a real game you would not do that, so why do it in a test? The tests were also all obviously done on the same CPU.
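For what it's worth, Unity's usual answer to "each cube has its own thread?" is the job system: work is split by index across a fixed worker pool, not one thread per object, and the Burst attribute mentioned earlier in the thread goes on the job struct. A minimal sketch with illustrative names:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// Burst compiles Execute(); the scheduler chops the index range into
// batches and runs them on worker threads.
[BurstCompile]
struct AddVelocityJob : IJobParallelFor
{
    public NativeArray<float> velocities;
    public float delta;

    public void Execute(int i) => velocities[i] += delta;
}

// Usage, e.g. inside a MonoBehaviour:
//   var job = new AddVelocityJob { velocities = array, delta = 0.5f };
//   job.Schedule(array.Length, 64).Complete(); // 64 indices per batch
```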
@@MaxMakesGames Ok I see. I can't make up my mind between Unity and UE5. Personally, performance is an obsession x)
@@mihawka If performance is a deal breaker for you, I'd say UE5. Unity DOTS is not production ready yet, and UE5 is really performant with graphics, which is usually the bottleneck.
@@MaxMakesGames Yes. I've been waiting for the DOTS release since 2018, and now I think I'll give UE5 and its Mass plugin a try.
To make it more apples to apples you should use Unity HDRP so the visuals would be much closer on both.
And of course, MassEntity on UE.
Great job!
Or just URP with the 2023 alpha versions. They now ship with post-processing by default; I just tested it today and it looks pretty much on par with default UE. The only difference I found was automatic brightness depending on where you look; that's still missing. It makes more sense to use that as a default benchmark, as that's the recommended package to use.
can you test Godot too
Hmmm I've never used Godot, but maybe I can try to mess with it
He's testing real tools not toys
The answer to the question bothering you is simple: UE5 is less memory optimized, so you are RAM bottlenecked before anything else. But it could also be the answer you've given; however, you didn't show anything to prove it.
True, I didn't look at RAM, and UE uses a lot. I doubt many people would be RAM bottlenecked though. I mean, a UE5 game might use around 6-8 GB of RAM and most people have 16-32 GB.
Today was my first day in Unity (it's my first day with a game engine btw). It was a nightmare 😥😢
Haha yes it is hard at first but you will get used to it fast ! Keep it up !
@@MaxMakesGames thanks bro for empowering me💚
@@blacknwhite1313 I'm six months ahead of you. I'm no expert yet, but it definitely gets easier. Just gotta get over that initial learning curve. Keep at it for a month or two, it'll start to make sense.
@@onerimeuse thank you so much bro for empowering me. Bro, the only thing that I am afraid of is coding classes and functions in Unity scripts 😔..💚💚
@@blacknwhite1313 that's a difficult part, for sure, but depending on the platform you're aiming to build on and the type of game you want to make, there are so many tools. Playmaker, for example, is like code with virtual Legos and natural language. If you're aiming to build for computer or decent phones and just want to get your feet wet making smaller games, it's a really powerful way to start (but it's not super performant, so not ideal for Oculus development).
Other things that exist are frameworks that are basically games in a box (there's an FPS one coming out). With a bunch of them you still need to add code and whatnot, but some are literally demo scenes with drop-down boxes for game details and plug-in assets.
Plus, the people in the community are amazing. Super helpful. Besides loads of channels like this, the forums and the people on Reddit and Twitter are extremely helpful.
You got this!
Unity uses C#, it's garbage collector will mess things up when you scale to lots of objects and burn your CPU
You can transpile C# code to compiled C++ with IL2CPP. There's still some overhead, but the performance difference is theoretically negligible.
Its*
Unreal c++ also garbage collects