Did AI make its own game whose simulated environment it generates on the fly?

C C Offline
Is it more coherent/lawful than the dreams of a brain, or worse? Anyway, a kind of computational solipsism may tentatively be on the horizon.
- - - - - -

An AI just made a Grand Theft Auto game, and it looks wild

EXCERPT: . . . Last month, an Intel Labs project brought machine learning techniques to Grand Theft Auto V, transforming its visuals into an extremely realistic version of itself. Like Imogen said, it's a little like watching a dashcam. And now, in a second interesting yet ultimately useless project, we can see what Grand Theft Auto V would look like if an artificial intelligence made it. Harrison Kinsley, also known as sentdex on YouTube and the author of Neural Networks from Scratch, uploaded a video of GAN Theft Auto, a GameGAN version of the game created by Kinsley and fellow programmer Daniel Kukieła.

GAN stands for generative adversarial network, and it contains two competing neural networks. The first is the generator: its job is to consume a sample dataset and then produce new content resembling that data. The second is the discriminator: it takes the generated content and tries to tell it apart from the original samples; its feedback penalizes output that gives itself away, so the generator learns to produce content that is more and more faithful to the sample.
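The adversarial loop described above can be sketched on toy 1-D data. This is an illustration only: GameGAN itself is a large image-conditioned model, and the linear generator, logistic discriminator, target distribution, and learning rate here are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1).
# Generator: fake = a*z + b on noise z ~ N(0, 1).
# Discriminator: d(x) = sigmoid(w*x + c), its estimate of P(x is real).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.standard_normal(batch)
    real = rng.normal(4.0, 1.0, batch)
    fake = a * z + b

    # --- Discriminator step: push d(real) -> 1 and d(fake) -> 0 ---
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    # Hand-derived gradients of the binary cross-entropy loss.
    gw = np.mean((dr - 1.0) * real) + np.mean(df * fake)
    gc = np.mean(dr - 1.0) + np.mean(df)
    w -= lr * gw
    c -= lr * gc

    # --- Generator step: push d(fake) -> 1 (non-saturating loss) ---
    df = sigmoid(w * fake + c)
    # dLoss/dfake = (df - 1) * w, then chain through fake = a*z + b.
    ga = np.mean((df - 1.0) * w * z)
    gb = np.mean((df - 1.0) * w)
    a -= lr * ga
    b -= lr * gb

print(f"learned offset b = {b:.2f}")  # drifts toward the real mean of 4.0
```

The point is the two-player structure: the discriminator's gradient comes from telling real apart from fake, and the generator's gradient flows *through* the discriminator, so each update makes the fakes slightly harder to distinguish.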

The result, while fascinating, does look like you're in the throes of one hell of a migraine. If you're fortunate enough to have never been afflicted by one, this is what all of the fuss is about. "Every pixel you see here is generated from a neural network while I play," explained Kinsley in the video. "The neural network is the entire game. There are no rules written here by us or the [RAGE] engine." It even reproduces how the mountains appear larger or smaller depending on how far away the car is, which surprised the team. It's playable through this link, if you want to give it a whirl... (MORE - details)

stryder Offline
It's interesting to view. It could be greatly improved if the assets of GTA were trained by the system independently. (For instance, a road bollard being rotated so its dimensions can be seen, perhaps phasing through different dynamic lighting so it can understand how its look changes through reflection or diffusion. If done properly it should be able to identify the light source and shade appropriately; furthermore, purposely withholding consistent lighting from such a model would be a test of whether the GAN can actually fill in the right level of lighting.)
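The kind of per-asset training data described above could be generated along these lines. This is a hypothetical sketch: the tiny 8x8 grayscale "bollard", the quarter-turn rotations, and the three intensity levels are all stand-ins for a real renderer's rotation and lighting passes.

```python
import numpy as np

# A crude stand-in asset: an 8x8 grayscale image with a bollard-like shape.
asset = np.zeros((8, 8))
asset[2:6, 3:5] = 1.0

samples = []
for quarter_turns in range(4):            # rotate to expose all sides
    view = np.rot90(asset, quarter_turns)
    for intensity in (0.5, 1.0, 1.5):     # vary the light level
        lit = np.clip(view * intensity, 0.0, 1.0)
        samples.append(lit)

dataset = np.stack(samples)               # 12 views of one asset
print(dataset.shape)  # -> (12, 8, 8)
```

A network trained on views like these sees the same object under many poses and light levels, which is the isolation-then-recombination idea suggested above.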

Kind of reminds me of what an artificial "Dream" environment would consist of; playing GTA in this instance is being "Lucid".

Furthermore, there is also the consideration of how babies develop their brains in regards to visual recognition. There's a likelihood that, to begin with, the visuals they see wouldn't be too different from how the GAN perceives its surroundings: objects would be obscure, they might not be viewable based upon depth perception, and people's faces could seem dilated. (I actually posit it's part of the reason some people develop coulrophobia [the fear of clowns], as people's faces, especially with makeup, could look obscure and even scary.)
