There's not really a TOO REALISTIC threshold, as you can always dial it down if that's not the style you're aiming for. The exciting objective of making realistic gameplay achievable is a few steps closer and something we'll likely see over the next few years as this sort of progress continues. Hats off to the UE devs
Agreed. I'd rather get ultra realism and be able to dial it down from there, any day, over having to make something look more real with upscalers. Yes, hats off to the UE team.
Human production in recent decades is now a fossil fuel. AI now has enough raw material to make actors, writers, viewers, and even your own mother useless.
Pre-rendered is still far ahead of anything real-time when it comes to realism.
look at brothers: a tale of two sons💀
WHAT I MOST WANT, LIKELY DEMAND, IS FOR CHARACTER CREATION TO BE AVAILABLE AND UNINTERRUPTED EVEN WITH LOW STORAGE.
Realistic, but there's something that happens when they smile or do any exaggerated expression. My mind just rejects it.
Final Fantasy all over again. Applying video captured 'rotoscoping' will NEVER give the results that a professional Animator can.
Mouths are still uncanny valley. They need to learn that less is better.
They will improve that as soon as it is in their roadmap
False you're just looking for something to complain about
@@NextWorldVR nah they'll figure it out sooner or later, but i'd say sooner
The Muscle Animator highlights a major reason for the uncanny valley: initiation of movement. When a human at rest starts to move, their muscles tighten *BEFORE* any movement occurs. Here, all muscle movement is a *RESULT* of the movement, as opposed to a cause of it.
Also, in other close-up samples, the fixational eye motion is missing, making the 100% stationary pupils look dead.
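For what it's worth, that pre-tension is fakeable in a rig by driving the activation channel from a slightly future sample of the motion curve, so tension ramps up before the joint visibly moves. A minimal sketch in plain C++ (no engine API; `ActivationFromMotion`, the 80 ms lead, and the tanh scaling are all illustrative guesses, not measured physiology):

```cpp
#include <cmath>
#include <functional>

// Drive a 0..1 muscle activation level from where the joint WILL be,
// so activation rises before visible movement starts.
double ActivationFromMotion(const std::function<double(double)>& jointAngle,
                            double timeSec,
                            double leadSec = 0.08) // anticipatory lead (guess)
{
    const double dt = 1.0 / 120.0; // finite-difference step (one anim tick)
    const double future = timeSec + leadSec;
    const double velocity =
        (jointAngle(future + dt) - jointAngle(future)) / dt;

    // Faster anticipated movement -> stronger pre-tension (arbitrary scale).
    return std::tanh(std::fabs(velocity) * 0.5);
}
```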
They are not getting realistic. Yes, they are very detailed, rich, etc., but there's still no realism in those dead eyes.
@@12126679 To be fair, this sounds like a few real former colleagues I've had.
Convergence in the eyes denotes focus. The parallel pupils in 3D models make it feel as if the characters are staring off into space. That's why video game faces that are not manually animated often appear lifeless, as if they are unable to focus their eyes on anything.
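That vergence cue is cheap to compute: aim each eyeball at a shared focus point instead of giving both eyes one forward rotation. A minimal sketch in plain C++ (no engine types; `EyeYawToward` is a made-up helper for illustration):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Horizontal yaw (radians) that points one eye at the focus target.
// z is treated as "forward" in this sketch.
double EyeYawToward(const Vec3& eyePos, const Vec3& focus)
{
    return std::atan2(focus.x - eyePos.x, focus.z - eyePos.z);
}
```

Called once per eye, a near target yields two noticeably different yaws (the eyes visibly converge), while a distant target yields near-parallel yaws, which is exactly the staring-into-space look described above.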
“And just like that…”
Naw dude, I’ve been using Unreal for years now. Nothing in that engine is “just like that”
Same here, 12 years working on that ue4/5, not a single step is easy.
And that metahuman thing is in my opinion just pure marketing, nobody wants a 350K polygon character in a video game.
@@arnaudkaho7137 Agreed. Metahumans look amazing, after you use Mesh to Metahuman to customize the look of course. But yeah, I would rather spend more engine resources on other things. The draw calls always get ya.
They need to make them more efficient. Even in cinematic scenes they drag along, and there are issues with the neck and body animations. I still can't figure out how to remove the gap between the shoulders and head while animating.
@@arnaudkaho7137 People use Unreal for movies and GFX as well. It might be used in cutscenes too, and I'm sure it can be optimized for use in a game. The tracking features can 100% be used, plus the muscle anim.
@@arnaudkaho7137 If I'm not mistaken, the recent Hellblade 2 uses MetaHuman for the characters, and that game is pretty well optimized.
Amazing tech, though I am frustrated how the speakers are like "just like that" and "untouched metahuman animator solves".....I've seen enough tech demos to know that NOTHING that gets aired at these presentations is "untouched". It's HEAVILY curated. I guarantee that there are all kinds of extra tricks happening here to make this look as easy and palatable as possible in this demo.
Death Stranding 2 is Decima Engine, not Unreal Engine.
And for open world look Amazing line UE.
They are using MetaHuman from Unreal Engine in Death Stranding 2. It's the first third-party game engine to use it.
@@DimonDeveloper No. Japanese devs have their own engines.
@@benkraze3103 This is satire I hope
@@benkraze3103 They are using Guerrilla's DECIMA engine; do research or watch the trailer properly.
Now I would like to be 20 years younger.
This is too late for that new technology in games. We are past our point of fun :/ .
With MetaHuman a couple of gens further down the line + high quality VR you could be as young or old as you want to be.
The Starfield developers have a lot to learn - if you take into account the beginning of this video (the character's face).
Yes, the smile looked stupid; it started too early and lasted too long... it was awful! Worse than Final Fantasy... We've had photoreal characters for decades; the ANIMATION is where it always falls apart.
@@NextWorldVR Yeah, but MetaHuman characters are faster to make than before. Animation comes later down the line; it's better to improve animation after you get the look you want.
@@NextWorldVR This is objectively incorrect; we haven't had photoreal characters for decades, just believable ones (Half-Life 2's people are believable, if not photoreal).
Wow people still talking pot shots at Starfield? Give it a rest already.
@@sub-jec-tiv same except people defending it
Starfield says “hello!” 😂
If the characters looked like they're breathing, with subtle chest and shoulder movement, they would probably look more alive and real.
Simulation of muscle stretching, FINALLY.
I am so tired of those plastic muscles in all the games that don't even change, no stretching. It made me sad. But FINALLY!
Fight Night Round 6 must come!
@@turrican4d599 Tekken 8. Not ideal, but it's something at least. Like THE beninging...
in de beninging...in de beningin....IN DE BENINGIN.
@@Gesensormeningeal layer 😂
Cool, but I'm still waiting for full facial muscle rigging before I'm going to be truly impressed.
Full facial musculature + randomised minor muscle action is how we get over the uncanny valley IMHO.
@@mnomadvfx There are SO many variables that go into the uncanny valley. A lot of it is rendering: getting the exact reflection and refraction in the eyes, changes in blood flow under the skin, micro-wrinkles in the skin and how they anisotropically alter the specularity of the skin.

The issue with trying to simulate the face from the inside out via muscle activation is that it becomes extremely difficult to get the exact expression you want, especially when you're trying to maintain the likeness of a real actor. It's easier to build a set of blend shapes (from scan data when you have a real person), blend between them, and add extra ones when you need something very specific, rather than trying to specify the physical properties of every muscle, lump of fat and skin wrinkle to get them to fold in just the right way.

I know Weta built a system that allows the skin to kind of slide over the blend shapes, maintaining an appearance of physical elasticity in the motion of wrinkle and pore detail, while conforming to the scanned/sculpted shape and the animated interpolation of it, which isn't physically based.
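For readers unfamiliar with the blend-shape approach described above, its core is just a weighted sum of per-vertex deltas over a neutral mesh. A minimal sketch in plain C++ (illustrative names, not any particular engine's API; assumes every shape has the same vertex count as the neutral mesh):

```cpp
#include <cstddef>
#include <vector>

struct V3 { float x, y, z; };

struct BlendShape {
    std::vector<V3> delta; // per-vertex offset from the neutral mesh
};

// Final face = neutral mesh + weighted sum of shape deltas.
void ApplyBlendShapes(const std::vector<V3>& neutral,
                      const std::vector<BlendShape>& shapes,
                      const std::vector<float>& weights, // one per shape, 0..1
                      std::vector<V3>& out)
{
    out = neutral;
    for (std::size_t s = 0; s < shapes.size(); ++s)
        for (std::size_t v = 0; v < out.size(); ++v) {
            out[v].x += weights[s] * shapes[s].delta[v].x;
            out[v].y += weights[s] * shapes[s].delta[v].y;
            out[v].z += weights[s] * shapes[s].delta[v].z;
        }
}
```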
I really hate that they now have UEFN-only features.
Mesh and lighting are really good. But the animation and skin sliding / flesh deformation still have a loooong way to go.
yea go provide us with your own technology
@@theflame45 I don’t have my own and why should I? And why would you want new tech from me a single individual? Odd request.
Yeah, they'll just have to leave the clothes in for now. You have every right to pinpoint every flaw in what we have so far; people like you contribute to the flawless with your criticisms of the flawed.
They're at the stage where, with the right lighting, animation, and camera work, they look realistic, but you can still tell the difference when you put the model next to the actor.
It's not too realistic; this is what we've been waiting for!
Still not realistic.
Wow, Elle Fanning in Death Stranding 2!
4:40 I was like "Ezekiel from TWD?" 5:09 ahhh, yess
Let me tell you an anatomical secret: for robots not to look like robots, the pupils have to make very fast movements (micro-jitter). The human eye doesn't notice it, but focusing on objects makes the eye tremble at high speed, and that is what makes the other person's eyes look alive to us. At the same time, our eyes read the eye movements of the person we're talking to, and if all of that is missing, your subcortex screams at you: this is a cyborg or a corpse in front of you.
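A minimal sketch of that fixational jitter in plain C++ (the 0.05-degree amplitude is a rough illustrative guess; real microsaccades are discrete fast jumps, so a production version would fire timed impulses rather than per-frame noise):

```cpp
#include <random>

struct Gaze { double yaw, pitch; }; // radians

// Add tiny jitter to a "locked" gaze so the pupils never sit perfectly still.
Gaze AddMicrosaccadeJitter(Gaze base, std::mt19937& rng)
{
    constexpr double kDeg2Rad = 3.14159265358979 / 180.0;
    std::normal_distribution<double> jitter(0.0, 0.05 * kDeg2Rad);
    base.yaw   += jitter(rng);
    base.pitch += jitter(rng);
    return base;
}
```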
Way to impressively make any game look the same. Literally. But I admit I appreciate the tech advancements that make this possible.
The irony of showing Kojima stuff in an Unreal demo when they used Unity's Ziva RT to make the body deformations... :)
I loved Khary Payton in The Walking Dead, glad to see him in the gaming scene as well.
Metahuman is a content creation tool. It is used to create the character art of Death Stranding 2, but Decima Engine renders their actual in-game appearance in real time, not Unreal.
The initial video is still deep inside the uncanny valley
Face demo: Feels like the lips are not interacting with the teeth
Death Stranding 2: the hair moves in slow motion compared to reality (like the hair is underwater).
Muscle demo has some proportion issues (seems to be worst around the lats) and it looks like the movements are triggering the muscles, not the muscles triggering the movements.
Getting closer to real though for sure.
The cloth stuff seemed pretty flawless!
Now that's what I call real progress. As an architect I've had experience with 3D programs since the '90s, but Unreal has really taken it to the highest level. Great to see!
I don't know if the kids working today realize how much work went into early animation.. the tools now are unbelievable. Remember when the rudimentary cloth modeling plugin was first introduced to Max?
23:08 nice god of war reference 🤣
Nice. But why does everyone still have the same 3 body types? Women's breasts vary; some men are built and cut. Like, why are y'all avoiding finer control over the body?
And I'm also missing the ability to create MetaHumans of different ages, especially younger versions like baby, kid and teenager
Exactly. I hate this about MetaHuman. An incredibly easy-to-use tool, but it leaves out one thing that is super important and harder to get right.
Normal guys reacting to MetaHuman: Wowie, I can create my digital ver!
Devs reacting to MetaHuman: so many realistic NPCs, easily created 😈
DS2 will run on Decima, what does that have to do with UE5?
I’m really impressed with the clothing system, that’s badass
I want to know how to make a MetaHuman speak with her body animated by rigging. I can do them separately with Audio2Face, but can't get them together. The face and body won't stick together in the Sequencer.
OMG THAT WAS AWESOME
People are quite wrong about this! You enter uncanny valley territory NOT when characters in games and cartoons look real, BUT when they look almost real and there is something wrong that you cannot put your finger on or understand; it's only slightly off, so they are neither a depiction of a human nor a human. A good example of graphics gone wrong is the new Dune game, while a great example of characters being enough of a caricature that they look "good", or at least don't look weird, is World of Warcraft. Look at director Ralph Bakshi's Lord of the Rings movie, where they recorded actual actors and then drew over them, and how incredibly weird and uncanny it looks, yet it's not until you actually know the reason that you can put a finger on it.
Maybe I am old-fashioned, but I don't really think games need ultra-realistic graphics. There's a certain charm in games with a particular style, something unlike what we see every day.
I love Unreal Engine. Been following its progress for a long time. I am excited for FSR3, FSR upscaling with AI improvements, and Unreal 5 games with FSR3 frame generation :)
Brilliant stuff.... the one thing that always gets me, though, is the completely overboard mouth animations... like, I get it for intense emotion anims... but just talking? Nobody goes into full-blown facial conniptions ordering a meal, showing all their teeth whenever possible, really overly mouthing every little word...
I suppose it's just for showing it off clearly.... but man, subtlety is a thing..
That is massively impressive... I should admit.
Fantastic progress. Still sitting in that uncanny valley though. Not quite there yet.
I think if we aimed to recreate the “cinema look” from classic cinema instead of modern cinema it would be much more convincing.
Thank god it’s still easy to see the CG giveaways.
It's too perfect; a real human has imperfections.
The faces are too symmetrical; nobody in real life has a symmetrical face, and the eyes look lifeless. It's gonna get more realistic in the future though, as that is the roadmap; we are not going back to Atari pixels, only indie devs are doing that.
That muscle system demonstration was insanely mindblowing. Unreal is way too real for me now. 😍
The Cap and Black Panther animations look absolutely amazing. Wow!
The most impressive thing about all this is that it's still not fully capable of fooling you into thinking it's real, despite looking as realistic as it does.
Hypothetically, I would even go so far as to say the only way this, and even future tech, will be able to convince someone that it's real is to make them first forget what reality is.
Dude, your last sentence digs deep, and I have to admit that I agree with you.
Reading some comments here from people who find Unreal 5 to be "too realistic" in its visual reproduction of humans, I wonder if their writers remember what a real human looks and acts like.
Just imagine the potential: the micromanaging and total customization a director can have over an actor's core performance without needing multiple takes.....
honestly the most exciting thing is cloth sim and muscle system...
I'm ready for Meta to hurry up and release their Codec Avatars they've been developing with this very same technology.
The real gem in this keynote was not what it means for future flatscreen games and consoles.... but what it means for what's on the horizon in VR.
Bioware used to make games with characters that seemed so realistic and cinema quality. I miss those days.
8:05 "convincing tongue animation" 😳😳
Do something with hair movement; it's like they're underwater. Across many games, no developer has paid any attention to this for years.
Ultra cool
This keeps getting recommended to me, so I had to pop in to say that most of these are not metahumans or using any of the metahuman technology.

Death Stranding 1 & 2 are proprietary systems, and the character scans, models, motion capture and blendshapes were done by film artists (I think at DNEG, if I remember, but DD also does a lot of games contracts, like The Quarry. All of Sony's in-house stuff is done by their film artists at Visual Arts Studio or outsourced to a VFX house like Digital Domain). So we can be pretty sure that OD (also from Kojima) is also mostly outsourced. Kojima also used pre-rendered trailers in V-Ray (not Unreal) for Death Stranding 1, so I'm pretty sure OD is, also.

The ML Deformer for muscle simulation doesn't work for production yet, and it's specifically for film, not games. Muscle sim is still done in 3rd party software (Houdini, or Ziva before the shutdown) and brought in as alembics (again, for film and virtual production. It does not work for games yet because of the vertex influence limits. It's way too computationally heavy).

Rise of Hydra - Skydance Interactive is part of Skydance, the VFX house. Those aren't metahumans, and I highly doubt Marvel would let them use Metahuman tech. Senua isn't a metahuman, either. They only used the facial mocap system (and I think only for editing the facial capture in the engine, not capturing the performances).

Basically, the entire industry is a lie and Epic keeps milking it.
I'm glad MetaHuman helps with better gameplay.
Muscle animator seems pretty nice :o However, I am wondering how much of it we can get in a simulator or a game 🤔🤔 Maybe it will only be available in such high detail for simulators and education software instead.
incredible
This technology has reached a point where, if it’s not exactly like actual reality, you only notice the flaws. I appreciate the quality, but it’s been at this stage for so long that, even with constant improvements, it doesn’t break the final barrier that would make me go 'wow' anymore.
I think we are just 10 years away from seeing animation that looks 100% real
No, it won't happen. You need some part of the uncanny in animation just to be able to tell the difference from a real person. Otherwise, it will be used in the wrong ways, and that will harm people.
Give consumers image-to-3D-model for free, and sell virtual 3D fitting rooms to online retailers. The savings from returned products should make up for the cost of the virtual 3D fitting rooms, while making online clothes shopping much easier for people.
Great👍👏
I feel like the next big step would be to improve subsurface scattering
Damn impressive!
words were never chosen more carefully.
The sync between audio and lip movement is noticeably out most of the time, unless they are speaking slowly, then it looks fine.
Which is odd, that shouldn't be a hard thing to get right when you have complete control of both.
Maybe this issue got introduced during the editing of the youtube video rather than by UE
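If the lag really is a constant encoding or playback offset, the fix is as mechanical as this comment implies: measure the offset once and shift the mouth-shape track by it. A minimal sketch in plain C++ (illustrative structures, not UE's actual animation API):

```cpp
#include <vector>

struct VisemeKey {
    double timeSec;  // when this mouth shape peaks
    int    visemeId; // which mouth shape
    float  weight;   // 0..1 blend strength
};

// Shift the whole viseme track by a measured offset.
// A negative offset plays mouth shapes earlier (fixes lagging lips).
void CompensateLipSync(std::vector<VisemeKey>& track, double offsetSec)
{
    for (auto& key : track)
        key.timeSec += offsetSec;
}
```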
I can't wait to see this in games in.... 15 years
shoutout to the Xbox Series S for holding back the industry
This is insane
Ari Folman's The Congress turns real.
But the thing that exposes them as a model is how unlifelike their eyes and mouths move. In reality your eyes move in milliseconds, so they need to move faster; you also blink faster, so they need to make that faster too. Basically, don't make the facial muscle movements so slow, because in reality you do them really fast, since your brain is doing it for you. And also, no person wiggles that much when they stand still.
And for some reason still well within the uncanny valley
It looks impressive but they look like CG imo. Not even 'uncanny valley' like. Which is fine. It's just more data.
When can we get this detail in VR?
nVidia 6000 series graphics cards, next gen OLED headsets. Maybe.
Can we have stylized imports like iClone has?
This will be great for tailor made clothing
It'd be nice if even a handful of modern games looked remotely like the UE demos from 4 years ago, let alone this stuff.
I love you guys you are the best
Please work on your encoding; even the 4K quality is like 720p with an extremely bad color bitrate.
This engine is amazing I just wish I had the talent and assets to push it to the limit.
When there is a 'metaverse' that is indistinguishable from the real world, I am going in and I'm not coming back.
Ready Player One?
Yes ... yes I am.
By definition, if it's indistinguishable from the real world, it won't be "Ready Player One".
It will be... the real world.
There is still a lot of uncanny valley in the expressions and emotion.. part of the face being totally dead and expressionless while other areas overcompensate, making the expressions look like some kind of psychopathic Bell's palsy sufferer. It will get there, but it's not close yet.
Unreal needs to support C# and allow us to easily import Unity projects into it. I bet most Unity developers would change to Unreal.
Muscle animation system looks great, but I think either there's a small bug or the character model is weird, cos he has virtually NO LAT DEVELOPMENT lmao
At this point I am convinced that the uncanny valley effect is 100% due to the mouth and not the eyes.
What if we could do all this with a prompt, then refine it with another prompt, add the finishing touches with a prompt, save it with a prompt, and invoke the character movement, placement, etc. with a prompt?
I noticed no one wants to make the perfect-looking female. Can't wait till I can do it. Also, we need elves and fairies, cat people, fox people, monsters, undead, fantasy of all types, plus sci-fi fantasy. This is going in the right direction; it looks amazing.
Still, Death Stranding 2, or The Beach or whatever they call it, used a different engine: the Decima engine.
And it showcases an insane amount of detail as well.
They used UE's Metahuman for the characters
@@xvil9x932 Thanks, I read about it. It appears MetaHuman is shaping the future of the 3D world; the technology is fast and the results are accurate. Kojima Productions has a unique way of developing new technologies when they use them, like they did with the Fox Engine and the Decima Engine, and now by using Unreal Engine.
Yes. By the way, Kojima also used MetaHuman in his new project titled "OD" @@aladdinsplayz
"too realistic" while I get that primal "wtf that's not human" uncanny valley reaction
We aren't there yet. Inching closer and closer though.
21:44 Beautiful girl from "I Follow Rivers - Lykke Li"
Imagine being up there as an actor and promoting the tech that will put you on the unemployment line.
impressive !
60MB? Wow.. that's probably worth using.
That thumbnail, though. Right side: JRPG. Left side: Western game developed by Bioware.
Bro NPCs are so real now, that the player is the one that looks like the NPC💀
Sorry, but the results here are still far from realistic.
Almost, and very impressive! But most of the time there's still that uncanny valley. Maybe in another 5-10yrs they'll get there. But not quite yet.
Here's a thought: imagine Unreal Engine in 10 years, with all the AI developments on the horizon. Scary!
Anyone in Melbourne, Australia able to use this to create a proof of concept for a science fiction film?
The future is here. Thanks Unreal Engine people!
Is it not meant to match? The finished one at 9:48 - 9:52 does not match the original eye movements for example like here 7:25 - 7:29. Why is that?
Am I the only one that doesn't like the MetaHumans look?
I don't necessarily like the look of them. They have that uncanny "doll" vibe. They can look really good though. I just prefer a bit more stylized characters myself.
They're not bad, they're impressive but still have that uncanny feeling
Not by a longshot
Faces still seem problematic. The bodies though look good.
I agree. It's impressive now, but L.A. Noire was really impressive when it came out too, and now it just looks like a video game. Same shit's gonna happen with this when everyone's used to it.
It looks realistically better than anything we’ve seen before but I’m scared that a lot of games will start using it, leading to a very samey look between games