Great work
What's the scale of this project in terms of processing power? That is, what hardware was used, and how long did it take to generate however many images? I'm just getting started with GAN imagery; I tried generating some low-res (64x64) anime faces on the basic free Google Colab setup and it took about an hour, so this landscape project seems like it would be much, much more time-consuming?
Great question! I trained this project on 2 GPUs, so a bit more power than Google Colab, and it took about 5 days to train. Once training finished, I did all the generation on a single GPU, and it took mere minutes to generate thousands of images. So unless you're looking to generate hundreds of thousands, the only time-consuming part is training!
@Matchue624 OK, interesting. Btw, I would also be interested in seeing you do a walkthrough video of this project.
need that technical tutorial!
Hi, thanks for the video! I want to learn the technical implementation from you :) especially for fashion images, like T-shirts etc.
I'd also be very interested in seeing this - I know NVIDIA does theirs with PyTorch, but I'm also interested in TF implementations, which is what I'm more focused on.
Is it possible to use this technology to not only create photographs of an AI-generated world, but actually create an entire 3D plane of an AI-generated world that could be interacted with through the use of created avatars? And could a 4th dimension of time be added to this as well, so that it not only creates photographs of worlds, but actually creates entire worlds that one could experience virtually through an avatar? Not a computer-generated world designed by a human, but an actual AI world, where everyone would be exploring it for the first time, not knowing its intricacies.
And later on, would it be possible to populate it using a similar technique, in which animals and creatures could be created via AI-generated content in a similar vein? So that, in a sense, everything looks real and is hard to distinguish from what is not, yet the entire creation is made by artificial intelligence, only loosely based on our own real-world surroundings?
Great question! So far, this is not possible - yet! With more data and computational power, as well as new innovations, adding an additional dimension would certainly be possible. One approach in the meantime might be using procedural generation (like how Minecraft generates its worlds) in combination with AI, but this hasn't been explored in depth yet.
Count me in, I am interested to know :)