AI recreates game engine using less than two minutes of videogame footage

Researchers at the Georgia Institute of Technology have developed a new approach that uses artificial intelligence to learn a complete game engine, the basic software of a game that governs everything from character movement to rendering graphics.

In layman’s terms, the new system can replicate a game’s engine, creating a cloned version that is indistinguishable from the original when played.

The Georgia Tech team’s AI can learn how a video game operates just by watching two minutes of gameplay. On the right, the AI replicates Mega Man in the ‘Bomb Man’ stage; there were some failures, including a point at which the character disappears. The original is shown on the left.

Their AI system watches less than two minutes of gameplay video and then builds its own model of how the game operates by studying the frames and making predictions of future events, such as what path a character will choose or how enemies might react.

To get their AI agent to create an accurate predictive model that could account for all the physics of a 2D platform-style game, the team trained the AI on a single “speedrunner” video, where a player heads straight for the goal. This made “the training problem for the AI as difficult as possible.”
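To give a rough sense of what “studying frames and predicting future events” means, here is a minimal, illustrative sketch. It is not the researchers’ actual algorithm (their system parses sprites out of raw video and searches over full rule sets); it simply assumes frames have already been parsed into sprite positions and learns a crude constant-velocity rule per sprite from consecutive frames.

```python
# Illustrative sketch only: assumes each frame is already parsed into a
# {sprite_name: (x, y)} dict, and learns a simple per-sprite movement rule.
from collections import defaultdict

def learn_velocity_rules(frames):
    """Estimate an average per-frame displacement for each sprite."""
    deltas = defaultdict(list)
    for prev, curr in zip(frames, frames[1:]):
        for name, (x, y) in curr.items():
            if name in prev:
                px, py = prev[name]
                deltas[name].append((x - px, y - py))
    return {name: (sum(d[0] for d in ds) / len(ds),
                   sum(d[1] for d in ds) / len(ds))
            for name, ds in deltas.items()}

def predict_next_frame(frame, rules):
    """Apply the learned rules to predict each sprite's next position."""
    return {name: (x + rules.get(name, (0, 0))[0],
                   y + rules.get(name, (0, 0))[1])
            for name, (x, y) in frame.items()}

# Toy usage: a sprite observed moving right at 2 pixels per frame.
frames = [{"mario": (0, 40)}, {"mario": (2, 40)}, {"mario": (4, 40)}]
rules = learn_velocity_rules(frames)
print(predict_next_frame(frames[-1], rules))  # {'mario': (6.0, 40.0)}
```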

“Our AI creates the predictive model without ever accessing the game’s code, and makes significantly more accurate future event predictions than those of convolutional neural networks,” says Matthew Guzdial, lead researcher and Ph.D. student in computer science. “A single video won’t produce a perfect clone of the game engine, but by training the AI on just a few additional videos you get something that’s pretty close.”

They next tested how well the cloned engine would perform in actual gameplay. They employed a second AI to play the game level, checking that the protagonist didn’t fall through solid floors or escape damage when hit by an enemy. The result: an AI playing with the cloned engine proved indistinguishable from an AI playing with the original game engine.
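One way to picture this kind of test is to run the same agent on both engines and compare what happens. The sketch below is purely hypothetical: the `reset`/`step` engine interface, the `agent.act` policy, and the trajectory comparison are assumptions for illustration, not the researchers’ actual evaluation harness.

```python
# Hypothetical comparison harness, assuming engines expose reset()/step()
# and states are tuples of numbers (e.g. character position and health).
def rollout(engine, agent, max_steps=1000):
    """Run the agent in an engine and record the states it visits."""
    state = engine.reset()
    trajectory = [state]
    for _ in range(max_steps):
        action = agent.act(state)
        state, done = engine.step(action)
        trajectory.append(state)
        if done:
            break
    return trajectory

def engines_indistinguishable(original, clone, agent, tolerance=0.0):
    """Crude check: the same agent should produce (near-)identical trajectories."""
    t1, t2 = rollout(original, agent), rollout(clone, agent)
    if len(t1) != len(t2):
        return False
    return all(abs(a - b) <= tolerance
               for s1, s2 in zip(t1, t2)
               for a, b in zip(s1, s2))
```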

A section of gameplay video (left) is produced by the original Super Mario Bros. engine, and the cloned engine (right) demonstrates the ability to accurately predict animation states.

According to the researchers, the game engine created with their system was also more similar to the original than one produced by a convolutional neural network in the same test.

(source: GaTech blog, dailymail)