In lucid dreams I often notice the same breakages. If I move away from, and then back to, an object, it gets recreated without consistency. Cool to see these AI dreams advancing past my own.
I used to think people were lying when talking about lucid dreaming. Then it finally happened to me and it was amazing! Only happened once in 38 years (that I remember), but I am completely jealous of people who can do it at will or just have it happen to them often.
@@Mortiis558 The old trick: look at the back of your hands and try to wake up, I'll wait... Make a habit of doing that, say 5x a day. Sounds stupid? Until you do it while dreaming, triggering a more lucid dream-state.
This neural rendering looks like kind of the opposite of something I was always interested in. The problem would be to make one high-quality picture out of multiple low-quality photos taken across some kind of plane (like trying to make a top-view picture of your neighbourhood using a drone). Is there some kind of solution to that?
Question: How can one start designing simple applications and AIs, and slowly work up to works like these? What's the starting point, the journey, and the beyond?
Learn TensorFlow. Watch videos on YT. Then start very, very small, like making an AI that tries to find the next number or something like that. Because TensorFlow is from Google, there are some reeaaally good tutorials. They teach you how to make an AI that says what type of clothing something is (shoe, trouser, etc.).
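That "find the next number" starter can even be done without a framework. Here's a minimal sketch (my own toy example, plain NumPy, not from the video) of fitting y = x + 1 by gradient descent, which is the same idea TensorFlow automates for you:

```python
import numpy as np

# Starter exercise: learn to predict the next number in a sequence
# (here: y = x + 1) with one weight and one bias, trained by plain
# gradient descent on mean squared error.
x = np.arange(10, dtype=float)   # 0, 1, ..., 9
y = x + 1.0                      # the "next number" for each x

w, b = 0.0, 0.0                  # model: y_hat = w * x + b
lr = 0.01
for _ in range(5000):
    y_hat = w * x + b
    grad_w = 2 * np.mean((y_hat - y) * x)  # d(MSE)/dw
    grad_b = 2 * np.mean(y_hat - y)        # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w * 42 + b))         # prints 43: the number after 42
```

Once that makes sense, the jump to a Keras model doing the clothing-classification tutorial is mostly learning the API, not new math.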
This makes me really curious whether you could take a game like Half-Life Alyx, record hours of regulated gameplay footage on the highest graphics settings, then, like in this vid, generate a semantic-map version of the same gameplay (or just play back that session with graphics on potato mode), and have a neural net like this transform the potato mode into the highest graphics. In real time, though I don't know how long until that's possible.
Can you imagine the police investigating a scene where they have some camera recordings of a murder? Just like in some ''unreal'' movies for our time, they'd be able to recreate an entire scene of the crime in 4D using basic calculations to determine whether something happened or not. And why not use Google VR or something like that! I don't know if someone has already thought about it, or maybe government intelligence agencies are already working on it, but at least to me it looks like a great tool already! And by the way, it's very scary how fast everything is evolving!
We have material synthesis AI, dynamic fog/fluid AI, and photorealistic generation AI. It makes me wonder: if a game used all three, how hard would it be to make very in-depth and realistic games? You could even use procedural generation techniques and have an infinite worldspace.
Holy cow, this is so useful. Imagine designing a game with basic flat cel shading and letting the AI fill in the details in real time.
Is it fast enough to render tho?
AryaZaky Iman Fauzy It will be, two more papers down the line
turn a cow into a bull
I don't know much about graphics, but does it even need cel shading?
Would it put environmental artists out of a job? Texture artists?
If any games in the future use this type of "dream-like" rendering, it should honestly be horror games. This could actually simulate nightmares very well, with how it tries to brush off inconsistencies while giving you the feeling that something is off and things aren't the way you remembered, or should be. True dread!
We need Hideo Kojima to get the Silent Hill license and then train the AI to really mess with our pareidolia
Or Apocalyptic Games.
Put time travel into the horror mix with some high level conspiracy!
And
Hououin kyoma as the protagonist
Shit gets real!😎
Simulate reality.
I’m sure it’s fast, but I wonder how close to real-time we can get it within the next few years. It’s blowing my mind that the old theory that realistic graphics meant recreating as many physical properties of a scene as possible (ray tracing, viscosity equations, heading towards atomic or quark level detail etc) might just be rendered moot by AI technologies that simply “hallucinate a scene” for you. Saves design work AND rendering costs. Super exciting paper!
Nvidia is probably low-key working on a graphics card with a majority of tensor cores. If the game developing scene can follow, and they manage to harness this power while giving full creative control to the developers, I think it will be released shortly after. I could see myself buying an AI graphics card in 2030, with games being rendered fully realistically in VR. At the very least, this paper proves it is not just a distant dream.
Yeah, holy moly. Just 5-10 more years and we'll have this.
Is future
While this might be fun for us consumers, we should keep in mind that for what you describe to happen (AI doing all the work), video game designers/artists/directors/etc. would have to be in complete control of the AI and its internals; otherwise they wouldn't be able to realize the true vision they had.
I believe this type of research is very exciting, but I don't think it will thrive in video games.
@@thoughtsofapeer "If the game developing scene can follow..." That's the problem, though. That would require every video game studio to completely restructure itself (artists/level designers/directors would be close to useless, as the AI would do all the work) and change the way they make video games.
Btw, the AI would have to produce deterministic results every time, otherwise the game's ambitions/vision wouldn't be conveyed well, or the development cycle would need too many iterations to tune the AI.
Very promising, but still very far away in terms of practicality (i.e. it's great on paper, but very difficult to apply in production)
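The determinism point can be made concrete: as long as every random choice in the generator flows from a fixed seed, the "hallucinated" output is fully reproducible. A toy sketch (the function and parameter names here are hypothetical, just illustrating seeded generation, not any real renderer API):

```python
import random

# If all of a renderer's randomness comes from one isolated, seeded
# generator, the same level seed always yields the same generated
# detail, which is what a studio needs for a consistent vision.
def render_details(level_seed, n_props=5):
    rng = random.Random(level_seed)   # isolated, seeded RNG
    return [rng.uniform(0.0, 1.0) for _ in range(n_props)]

assert render_details(42) == render_details(42)   # deterministic replay
assert render_details(42) != render_details(43)   # but seed-dependent
```

The same principle carries over to neural generators: fix the model weights and the noise seed, and inference becomes a pure function of its inputs.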
These just keep getting better along with my grip on my papers
Just what you need to make your own 90's era reality-warping FMV horror game.
Night Trap!
Jerry! Where are we?
Harvester!
I don't want to die
Or a procedurally generated 90's First person dream simulation game.
These reconstructed videos look like the dreams you have when you sleep. Actually, that's the way it works: when you sleep, the neural networks in your brain reconstruct the pictures from abstract patterns.
It's probably not just dreams. Given that our senses have pretty low bandwidth, a lot of what we think we see is in fact just "reconstructed" in real time from what we think we should be seeing.
*I dropped all my other papers you told me to hold on to!* This is amazing!
Maybe in the future a characteristic of low-budget indie games will be super realistic graphics, because it would be way harder to create a specific art style than to just feed an algorithm some stock footage.
There are already photorealistic indie games out there, but most of them are quite simple and lack real gameplay.
@@martiddy any examples?
I think specific art styles would work well too. Just feed the AI pictures of different objects in that specific art style and it'll learn. Idk
SirBandicake that’s a crazy thought, and possibly quite realistic. Look at indie movies. It’s not hard to create a certain movie style or add expensive vfx, but replicating life is very easy. The same will happen with games. Amazing thought
@@BTTFRUS1 What Remains of Edith Finch is pretty damn close to photorealistic, even if somewhat stylized.
Imagine if LSD: Dream Emulator used this technique; it doesn't even have to be perfect.
Actually the 1st one (that didn't retain memory) would be super weird for LSD dream emulator
But it would need to learn what an LSD trip looks like, and you can't just record people's live vision.
@@androkon6920 I'd say we are already there in some sense. Take advantage of the A.I. used to create merged animals (google DeepMind animal pictures and you will see what I mean) and just tweak it a bit. There are probably many other different A.I. systems that can create desirable looks.
When I saw this years ago, I also thought it'd be a good way to visualize a trip in a movie ruclips.net/video/-P_Sfl4e1xo/видео.html
@@androkon6920 What about using pictures from artists that depict those scenes to train?
2:30
"The shirt became white!"
Yeah, and the _dude_ became black!
B L A C K E D
This just in: A.I forced to shutdown after Black Face controversy
I was your 69 like
that poor dude
Blm must take up this issue of systemic AI bigotry
10-100 years later:
Random dev: INTRODUCING REAL LIFE
More real than reality itself!
Lmao
@@martiddy Oh yeah, it's already at that point. In-game water on the road looks clear and clean, as there is not a particle of dust in it. And even the walls will have some kind of reflective surface.
Better Than Life.
… How do you exit this game?
No, wait - why would you want to?
@@davidwuhrer6704 you haven't, but it might be nice to get a reminder that it's a game every now and then
“computer, generate a fighting game/rhythm game-hybrid set in ireland during the potato famine, with a soundtrack by marilyn manson”
computer: PRESS X TO START
The time has come: my old childhood paint drawings will come to life.
This paper made me hold it so tightly it turned into a diamond. How is it possible that so many exciting things like this are happening so fast?
Lots and lots of hardworking people are creating and innovating day by day, month after month, of course! Truly inspirational.
DIAMOND HANDS
I was thinking the other day that this could be theoretically possible. Guess it already is.
This is incredible! Really breaking through with some crazy stuff here - I'm imagining this being useful for making low-budget animation storyboards and possibly actual animation/CG/compositing down the line. That'll be amazing!
Just wow. That's amazing. It seems that one day, perhaps, video games or computer graphics will be synthesized on the fly. Even if it was just textures and lighting being generated, it would be a huge step forward.
I wouldn't be surprised if artists soon- instead of becoming obsolete- will be able to create an art style to feed into an AI that it can then apply to anything they want.
Every week I wait for your new videos. I don't know but it gives me so much to look forward to in this ever changing world.
That AI is doing what I went to school to learn how to do. My skills will be obsolete before my student loans are paid off.
The age of the machines is almost upon us!
And people are still convinced there will always be jobs for humans and things AI can't do.
It's not a matter of IF an AI will take your job.
It's when.
@@KuraIthys I think that's a great prospect, assuming humans aren't so dumb as to keep clinging to our idiotic economic system, thus doing everything in their power to prevent it. We'd just have to make that one step and could live a life where we could just enjoy life and do what we want while the machines provide for us. No longer slaving away to make others richer, no longer slaving away doing some bullshit. I don't accept the fear-mongering of people losing their jobs and only rich people profiting from this. That would make no sense. For the current system to work, people need to buy stuff, and without them getting money it would collapse. Once we can establish total automation there would be no need for that. The machines would provide and people could do what they want.
Many people who have some sort of Art/Design degree only find jobs in the advertisement industry. They could finally do their own art instead of art meant to sell something. The technology we'd have would allow a single person to do his own personal project alone.
People could spend their free time doing more fulfilling jobs, things they'd like to do. Or just spend time with family or hobbies. Scientists wouldn't be bound by grants and could do the research they want (the only restriction being the resources available, which need to be distributed fairly).
People who define themselves by doing a dumb repetitive job for 8+ hours a day, not knowing what to do with free time on their hands, could keep rolling that stone up a hill over and over like Sisyphus. They shouldn't be the ones stopping us from gaining real freedom.
What I fear more than AI taking over are short sighted humans clinging to the status quo.
May the Science be with You
With automation, all the basic goods like food and water would basically be free. This would mean that people would consume as much as would suit/satisfy them, as there would be no price barrier holding them back. This increased consumption would lead to the earth's resources being depleted much faster than they are already being today - and it means that the automated/free industry would need to be 100% sustainable and circular right from the outset.
@@electron8262 Do you have any genuine idea how the current economy functions? We are currently living in an economy of exploitation of natural resources and obsolescence. And there is lots of planned obsolescence. We already have overproduction. A huge percentage of the stuff we produce gets thrown in the trash because it didn't sell, without having been used even once. People won't suddenly eat more than they already do just because they'll be provided with food. Look at the US and how fat many poor people are.
With better automation, production could be adapted to demand and not blind mass production. Quality products could be made that will last much longer instead of products that are literally designed to fail, so that people will buy a new one regularly.
Of course, recycling needs to be ramped up. It's our current economy that prevents recycling, because it's often cheaper to turn an entire mountain to dust to get the metals out than to recycle our trash. AI-assisted automation can help sort better.
People in third- and second-world countries destroy their habitats to make quick bucks. The rainforests are burned down by locals to grow crops. It's economically more lucrative short-term to sell the teak wood than to preserve nature. Why let a forest stand when you can sell it and make money? Poachers poach because rich people pay them handsomely compared to what they'd earn doing a normal job in their country. Without the desire to make money, those people would see no reason to destroy their environment.
I've seen this paper already, and it truly is amazing! Even labeling data from an image is hard enough, but turning the labels back into something recognizable by humans is just insane.
Wow, the fact that it remembered the buildings from the start is completely mind blowing
This made me drop my papers.
Put this in VR or AR and let someone walk through their own house with "15th Century Castle Filter" applied.
Wow, the left and right viewpoints are actually consistent enough to work as a makeshift stereogram. Do parallel 3D viewing by unfocusing your eyes until the two street signs converge, then bring it into focus, and you can check it out without special equipment. Very cool.
A few improvements later, and I can already see video games using photorealistic worlds with this technique. Would be awesome.
I keep watching this over, and over! I'm amazed!
The new Grand Theft Auto 7 tech demo be lookin crazy! 🤯
Brilliant channel, keep it up mate!
Always very inspiring papers you talk about. Thanks! =)
We need to have The Town with No Name's graphics improved with this.
Feels like that game went viral a month and a half ago, after being completely unheard of for some two decades... Weird how that happens! Is it the chaotic social dynamics of meme culture? Is it social media platform recommendation algorithms? Or is it a feedback loop between both that pulls things out of nowhere and makes them mainstream for no reason?
Really unreal. I understand it's truly happening, but it gets harder and harder for one's intuition of what is possible to keep up with reality.
Makes me think of dreams. Our brain generates scenery without remembering previous frames. Every time we look at a clock, the time changes... Text is mostly mixed-up letters. If our brains were powerful enough, our dreams would be perfectly simulated realities.
This thing has better spatial consistency than my dreams do, where a door in one house leads to another house and you find a mansion in a crawlspace.
This is MIND BLOWING! It's unbelievable how rapidly this technology is developing!
Hold on to your papers + two more papers down the line + what a time to be alive = the perfect Two Minute Papers
The morphing reminds me of when you Google Translate a certain sentence 100 times and it becomes super different from what you started with.
This looks pretty much like a dream: photo-realistic, but you can still tell something is off about the world.
One thing that came to my mind while seeing this is a game shown long back on PewDiePie's channel. It didn't use any sort of fancy graphics; it looked like an old recorded hand-cam video playing in real time, and the characters would move as if the video were playing when you interacted with them. The great thing is it didn't even require fancy hardware to play, as it was one of those beginner indie titles. Now imagine an AI powerful enough to make an interactive video where you could move the characters in any direction through deep learning, and hopefully it wouldn't need a graphics card at all, but could run with the same specs used to play ordinary videos on any device.
This channel is so fascinating, I love it!
I kinda like how imperfect it is; it's basically exactly what dreams look like, and I think that's super cool.
Is there a way to use this research in combination with NVIDIA's GauGAN? This really has so much creative potential, but I don't know even a bit about coding.
Nice video and very well explained. Thanks for giving us this info!
This channel is underrated.
Ah, it's so pleasant to listen to the Hunglish.
Thank you for the exciting content! :D
This is bordering on scary at this point, incredible work...
Two Minute Papers: "Segmentation map"
My programmer brain: "Segmentation fault!"
Great video. Keep up the great work 🤗 With this content it will be easy for you to grow.
Oh yeah, it's all coming together
One time I had a seizure, and the seizure dream during it looked exactly like this: I was in a car driving home, and the trees, cars, and buildings ran past me like a blur... Seeing it rendered by an AI was unsettling, to say the least.
Someone needs to make a channel called '2 more papers down the line' cuz I can't wait for what's to come :)
The one that recreated Pacman is unreal. We're actually in the future.
Congrats on 700K subs!
Wow... This shit's amazingly insane.
VR games with this kind of refined technology would be an amazing feat.
I can't wait for such technologies to be applied to photoscans. Those things usually take hundreds of photos and hours of processing, and the models still end up having holes and distortions.
But here? AI can predict what new photos would look like, or even easily create depth maps from individual photos.
Was waiting for someone to do this
The next step is to make this a filter for photorealistic video game rendering. You could have a game look "almost" real with classical rendering and then apply this AI to make the last details as close to real life as possible.
I would use a closer reference, though, so the AI only has to correct small details.
@two minutes paper
Which paper was shown at 0:26?
May I know the source?
Your videos are so amazing and scary at the same time
Do we know if the transformation from map to image is in real time?
So what is the use of this? Are segmentation maps more efficient to store?
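On the storage question: a raw segmentation map holds one small class label per pixel, while an RGB frame holds three 8-bit channels, so the map is a third of the size even before compression, and its large flat regions compress far better. A quick back-of-the-envelope sketch (my own illustration, assuming 8-bit labels, which covers up to 256 classes):

```python
import numpy as np

# Compare raw storage of a 1080p RGB frame versus a label map
# with one uint8 class id per pixel.
h, w = 1080, 1920
rgb_frame = np.zeros((h, w, 3), dtype=np.uint8)  # photoreal frame
seg_map   = np.zeros((h, w),    dtype=np.uint8)  # one label per pixel

print(rgb_frame.nbytes // seg_map.nbytes)        # prints 3
```

The bigger win is that the generator can resynthesize all the photorealistic detail from that compact map, so only the map has to be stored or transmitted.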
Its like a dream sequence
It's funny: I just read the title and the comments without watching the video, but got a quite good understanding of what the content of the video must be. I wonder if you could build an AI that does just that:
producing video predictions with the title and the comments as the data.
Wishing they'd use this AI on anime series, with a few tweaks to make the video look like anime-movie graphics quality, like Kimi no Na wa.
0:50 like in a dream 🤔
Good call!
In dreams things change when you look away and look back.
@@Soul-Burn exactly
@@Soul-Burn That usually happens in dreams with things we don't remember very well (like a random person or place you saw one day), but if it's something we remember well, then it's gonna appear consistently in our dreams.
@@martiddy Yeah, I have experienced lucid dreams and I can tell things like intricate patterns just cannot stay consistent in my dreams. Also, clocks and the text on calendars seemed like gibberish.
What should laymen do to make use of this information? It always intrigues me, but I've no practical use for it.
(What should we learn[?])
Could we reverse this to generate the depth map? With consistency we should get insane results.
Better than my memory!
0:50 2:16 these are what dreams look like
Had to buy some paper shredders today. Cost a lot of money.
This is completely fucking mindblowing.
Wonder if this would be better or worse for training self-driving cars than the "train on lots of different-looking environments" approach.
This would probably go much further with that amazing consistent video depth maps paper for the depth map.
wait its you lol
If by photorealistic you mean oil-painted! Lol, nice vid
What a time to be alive! truly
0:54 "unrealistic results"
*LSD users have left the chat*
I expect we'll see neural rendering become the future of video games, VR, etc., replacing rasterization, ray tracing, and so on, and it will arrive faster than we think. The real application will be AR — it's uniquely suited to that. What a time to be alive!
In lucid dreams I often notice the same breakages: if I move away from, and then back to, an object, it gets recreated without consistency. Cool to see these AI dreams advancing past my own.
I used to think people were lying when talking about lucid dreaming. Then it finally happened to me and it was amazing! Only happened once in 38 years (that I remember), but I am completely jealous of people who can do it at will or just have it happen to them often.
@@Mortiis558 The old trick: look at the back of your hands and try to wake up. I'll wait... Make a habit of doing that, say 5x a day. Sounds stupid..? Until you do it while dreaming, triggering a more lucid dream state.
real-time real-life VR games! insane...
Getting ready for the Infinite Tsukuyomi
This neural rendering looks kind of like the opposite of something I was always interested in. The problem would be to make a high-quality picture out of multiple low-quality photos taken across some kind of plane (like making a top-view picture of your neighbourhood using a drone). Is there some kind of solution to that?
This could help improve self-driving cars.
Could we now transform old games with bad graphics into games with better graphics?
Question: How does one start out designing simple applications and AIs, and slowly work up to projects like these? What's the starting point, the journey, and what comes after?
Learn TensorFlow. Watch videos on YT. Then start very, very small, like making an AI that tries to find the next number or something like that. Because TensorFlow is from Google, there are some reeaaally good tutorials. They teach you how to make an AI that says what type of clothing an item is (shoe, trouser, etc.)
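That "find the next number" starter really can be tiny. Here's a hedged sketch of the core loop those tutorials wrap for you — no TensorFlow, just plain Python, with a made-up training sequence — showing the guess/measure-error/adjust cycle that gradient descent automates:

```python
# Bare-bones "predict the next number" learner: fit the step size of an
# arithmetic sequence by gradient descent on the squared prediction error.

data = [2, 4, 6, 8, 10]  # hypothetical training sequence (true step: 2)

step = 0.0            # the single parameter we want to learn
learning_rate = 0.01

for epoch in range(1000):
    for prev, nxt in zip(data, data[1:]):
        prediction = prev + step
        error = prediction - nxt
        step -= learning_rate * error  # gradient of 0.5*error**2 w.r.t. step

next_number = data[-1] + step
print(round(step, 2), round(next_number))  # step converges toward 2.0
```

Frameworks like TensorFlow do exactly this, just with millions of parameters and automatic gradients instead of one hand-derived update.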
What a time to be alive!
"barely photorealistic" would be more accurate than "photorealistic" :)
What's the rendering time for the translation? Is it close to real time?
I love watching 2MP
It would make a sick music video
4:20 that's what my dreams look like
What a time to be alive.
What a time to be alive
Imagine making a horror game look real with this AI.
This is overwhelming.
the morphing is cool.
The human project is growing so fast. This is exhilarating.
2020:
humans program the computer
2030:
the computer programs humans
So you are saying that computers will finally replace television.
Perfect. Now give this AI a few images of Minecraft and it will build a realistic Minecraft hahaha
This makes me really curious as to whether you could take a game like Half-Life: Alyx, record hours of regulated gameplay footage on the highest graphics settings, then, like in this video, generate a semantic-map version of the same gameplay (or just play that session back with graphics on potato mode). Then have a neural net like this transform potato mode into the highest gfx. In real time, eventually, though I don't know how long until that's possible.
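The supervised setup described above — learn a mapping from paired potato-mode and high-settings frames — would really use an image-to-image network (pix2pix-style). As a toy stand-in, with entirely fabricated "pixel brightness" pairs, the same idea can be shown with a one-parameter-pair linear model fit by least squares:

```python
# Toy stand-in for learning a potato-mode -> high-settings mapping from
# paired recordings. A real system would train a conv net on full frames;
# here we fit high = a*low + b over fabricated paired pixel brightnesses.

low  = [10, 30, 50, 70, 90, 110, 130, 150]   # "potato mode" samples
high = [26, 54, 86, 115, 146, 174, 206, 235] # same pixels on high settings

n = len(low)
mean_l = sum(low) / n
mean_h = sum(high) / n
# Closed-form least-squares slope and intercept.
a = (sum((l - mean_l) * (h - mean_h) for l, h in zip(low, high))
     / sum((l - mean_l) ** 2 for l in low))
b = mean_h - a * mean_l

# "Enhance" a new potato-mode pixel with the learned mapping.
enhanced = a * 80 + b
print(round(a, 2), round(b, 1), round(enhanced))
```

The network version replaces the two learned numbers with millions of weights, but the recipe is the same: record pairs, fit a mapping, apply it to new low-quality frames.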
Perhaps this is how graphics in my brain works while dreaming. It produces very dream-like results (at least for me).
Can you imagine the police investigating a scene where they have some camera recordings of a murder? Just like in movies that seem "unreal" for our time, they'll be able to recreate an entire crime scene in 4D, using basic calculations to determine whether something happened or not. And why not view it in Google VR or something like that? I don't know if someone has already thought of this, or if government intelligence agencies are already working on it, but to me at least it looks like an obviously great tool! And by the way, it's very scary how fast everything is evolving!
We have material-synthesis AI, dynamic fog/fluid AI, and photorealistic-generation AI. It makes me wonder: if a game used all three, how hard would it be to make very in-depth and realistic games? You could even use procedural generation techniques and have an infinite worldspace.