Following last week’s virtual GTC keynote and the announcement of their Ampere architecture, this week NVIDIA has been holding the back half of their conference schedule. As with the keynote, the company has been posting numerous sessions on everything NVIDIA, from Ampere to CUDA to remote desktop. But perhaps the most interesting talk – and certainly the most amusing – is coming from NVIDIA’s research group.

Tasked with developing future technologies and finding new uses for current technologies, today the group is announcing that they have taught a neural network Pac-Man.

And no, I don’t mean how to play Pac-Man. I mean how to be the game of Pac-Man.

The reveal, timed to coincide with the 40th anniversary of the ghost-munching game, is coming out of NVIDIA’s research into Generative Adversarial Networks (GANs). At a very high level, a GAN pits two neural networks against each other – typically one learning how to do a task (the generator) and the other learning how to spot the first network’s output as fake (the discriminator) – the idea being that the competition forces both networks to improve in order to win. In terms of practical applications, GANs have most famously been used in research projects to produce realistic-looking images of real-world items, to upscale existing images, and for other image synthesis/manipulation tasks.
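
To make that adversarial setup a bit more concrete, below is a minimal sketch of a GAN training loop in PyTorch. To be clear, the tiny models, the stand-in “real” data, and the hyperparameters here are illustrative assumptions for this article, not anything from NVIDIA’s work:

```python
import torch
import torch.nn as nn

LATENT, DATA = 16, 64

# Generator: turns random noise into a synthetic sample.
G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DATA))
# Discriminator: outputs a logit for "this sample is real".
D = nn.Sequential(nn.Linear(DATA, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1_000):
    real = torch.randn(32, DATA)        # stand-in for real training samples
    fake = G(torch.randn(32, LATENT))   # the generator's attempt

    # The discriminator learns to label real samples 1 and fakes 0...
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # ...while the generator learns to make the discriminator call its
    # fakes real. The competition is what drives both networks to improve.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```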

For Pac-Man, however, the researchers behind the fittingly named GameGAN project took things one step further, focusing on creating a GAN that can be taught how to emulate/generate a video game. This includes not only recreating the look of a game, but perhaps most importantly, the rules of a game as well. In essence, GameGAN is intended to learn how a game works by watching it, not unlike a human would.
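
In GameGAN’s case, then, the generator isn’t conjuring images from random noise; roughly speaking, it predicts the next frame of the game from the current frame and the player’s input, while the discriminator judges whether that transition looks like real gameplay – which is where the game’s rules get absorbed. The actual GameGAN architecture adds a memory module and other machinery, so the sketch below, which just shows that conditional interface on dummy data, is very much a simplification with made-up shapes and models:

```python
import torch
import torch.nn as nn

FRAME, ACTIONS = 84 * 84, 5   # flattened frame pixels; up/down/left/right/none

# Generator: (current frame, one-hot player input) -> predicted next frame.
G = nn.Sequential(nn.Linear(FRAME + ACTIONS, 512), nn.ReLU(),
                  nn.Linear(512, FRAME), nn.Sigmoid())

# Discriminator: scores whether a (frame, input, next frame) transition
# could plausibly have come from the real game.
D = nn.Sequential(nn.Linear(FRAME * 2 + ACTIONS, 512), nn.ReLU(),
                  nn.Linear(512, 1))

# One forward pass on dummy data; training would then proceed adversarially,
# just as in the generic sketch above.
frame = torch.rand(1, FRAME)
action = nn.functional.one_hot(torch.tensor([2]), ACTIONS).float()  # "left"
next_frame = G(torch.cat([frame, action], dim=-1))
realism = D(torch.cat([frame, action, next_frame], dim=-1))
```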

For their first project, the GameGAN researchers settled on Pac-Man, which is as good a starting point as any. The 1980 game has relatively simple rules and graphics, and crucially for the training process, a complete game can be played in a short amount of time. As a result, over the course of 50,000 “episodes” of training, the researchers taught a GAN how to be Pac-Man solely by having the neural network watch the game being played.
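
As for what “watching the game” amounts to, the training data boils down to a long log of (frame, input, next frame) triples recorded across many episodes. The PacManEnv wrapper and the random inputs below are hypothetical stand-ins – NVIDIA’s actual tooling isn’t public – but the key point holds: the network only ever sees pixels and inputs, never the game’s code:

```python
import random

class PacManEnv:
    """Hypothetical emulator wrapper: exposes frames, accepts joystick input."""
    SIZE = 84 * 84
    def reset(self):
        return [0.0] * self.SIZE                  # first frame, flattened
    def step(self, action):
        next_frame = [random.random() for _ in range(self.SIZE)]
        done = random.random() < 0.01             # every game ends eventually
        return next_frame, done

ACTIONS = [0, 1, 2, 3, 4]                         # up/down/left/right/none

def record_episodes(n_episodes):
    """Play n_episodes games, logging every (frame, input, next frame) step."""
    env, dataset = PacManEnv(), []
    for _ in range(n_episodes):
        frame, done = env.reset(), False
        while not done:
            action = random.choice(ACTIONS)       # stand-in for a real player
            next_frame, done = env.step(action)
            dataset.append((frame, action, next_frame))
            frame = next_frame
    return dataset

data = record_episodes(50)   # NVIDIA's run watched roughly 50,000 episodes
```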

And most impressive of all, the crazy thing actually works.

In a video released by NVIDIA, the company is briefly showing off the Pac-Man-trained GameGAN in action. While the resulting game isn’t a pixel-perfect recreation of Pac-Man – notably, GameGAN’s simulated resolution is lower – the game nonetheless looks and functions like the arcade version of Pac-Man. And it’s not just for looks, either: the GameGAN version of Pac-Man accepts player input, just like the real game. In fact, while it’s not ready for public consumption quite yet, NVIDIA has already said that they want to release a publicly playable version this summer, so that everyone can see it in action.
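
It’s worth spelling out what “playable” means here: once trained, the generator is the game loop. Each tick it takes the player’s input and the current frame and emits the next frame, with no Pac-Man code running underneath. A rough sketch, assuming a trained next-frame generator like the one outlined earlier (the key mapping and tick count are made up):

```python
import torch

KEYMAP = {"w": 0, "s": 1, "a": 2, "d": 3, "": 4}  # up/down/left/right/none

def play(generator, first_frame, n_actions=5, ticks=100):
    """Run the trained GAN *as* the game: input goes in, frames come out."""
    frame = first_frame
    for _ in range(ticks):
        key = input("move (w/a/s/d, blank for none): ")
        action = torch.zeros(1, n_actions)
        action[0, KEYMAP.get(key, 4)] = 1.0       # one-hot player input
        with torch.no_grad():                     # inference only
            frame = generator(torch.cat([frame, action], dim=-1))
        # A real front-end would render `frame` to the screen here.
    return frame
```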

Fittingly for a gaming-related research project, the training and development for GameGAN was equally silly at times. Because the network needed to consume thousands upon thousands of gameplay sessions – and NVIDIA presumably doesn’t want to pay its staff to play Pac-Man all day – the researchers relied on a Pac-Man-playing bot to automatically play the game. As a result, the AI that is GameGAN has essentially been trained in Pac-Man by another AI. And this is not without repercussions: in their presentation, the researchers noted that because the Pac-Man bot was so good at the game, GameGAN has developed a tendency to avoid killing Pac-Man, as if that were part of the rules. Which, if nothing else, is a lot more comforting than finding out that our soon-to-be AI overlords are playing favorites.

All told, training GameGAN for Pac-Man took a quad GV100 setup four days, over which time it monitored 50,000 gameplay sessions. To put the amount of hardware used in perspective, four GV100 GPUs add up to 84.4 billion transistors – almost 10 million times as many as the roughly 8,500 transistors in the original arcade game’s Z80 CPU. So while teaching a GAN how to be Pac-Man is incredibly impressive, it is, perhaps, not an especially efficient way to execute the game.

Meanwhile, figuring out how to teach a neural network to be Pac-Man does have some practical goals as well. According to the research group, one big focus right now is using this concept to more quickly build simulators, which traditionally have to be carefully constructed by hand in order to capture all of the possible interactions. If a neural network can instead learn how something behaves by watching what’s happening and what inputs are being made, this could conceivably make creating simulators far faster and easier. Interestingly, the entire concept leads to something of a self-feedback loop, as the idea is to then use those simulators to train other neural networks how to perform a task, such as NVIDIA’s favorite goal of self-driving cars.
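
That self-feedback loop might look something like the sketch below, where the learned simulator stands in for the real environment while a separate agent trains inside it. Everything here is a hypothetical illustration – the reward stub especially, since a real task would define its own reward:

```python
import torch
import torch.nn as nn

def train_agent_in_learned_sim(generator, first_frame,
                               frame_dim=84 * 84, n_actions=5, steps=200):
    """Train a small policy network entirely inside the GAN's learned world."""
    policy = nn.Sequential(nn.Linear(frame_dim, 128), nn.ReLU(),
                           nn.Linear(128, n_actions))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    frame = first_frame
    for _ in range(steps):
        # The agent picks an input based on the current generated frame...
        dist = torch.distributions.Categorical(logits=policy(frame))
        action_idx = dist.sample()
        action = nn.functional.one_hot(action_idx, n_actions).float()
        # ...and the learned simulator, not the real game, answers with the
        # next frame.
        with torch.no_grad():
            frame = generator(torch.cat([frame, action], dim=-1))
        reward = frame.mean()   # stub reward; a real task defines its own
        # One-step REINFORCE-style update on the policy.
        loss = (-dist.log_prob(action_idx) * reward).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy
```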

Ultimately, whether it leads to real-world payoffs or not, there’s something amusingly human about a neural network learning a game by observing – even (and especially) if it doesn’t always learn the desired lesson.

Source: NVIDIA

Comments

  • surt - Saturday, May 23, 2020 - link

    Funny, but no, there would be no copyright issues since it's demonstrable that no copying took place.
  • ajp_anton - Friday, May 22, 2020 - link

    When actually executing the game, would this be more efficient than the original one? And what about storage space?

    This kind of game would be a hell to debug though.
  • Lord of the Bored - Saturday, May 23, 2020 - link

    No way in heck a trained neural net runs faster or in less space than a couple K of Z80 ASM.
    Especially not if it has to use modern IO. A minimal USB stack is more code, and takes more processing time, than all of Pac-Man.
  • Spunjji - Tuesday, May 26, 2020 - link

    "A minimal USB stack is more code, and takes more processing time, than all of Pac-Man"

    Doof. That really puts things in perspective!
  • Lord of the Bored - Tuesday, May 26, 2020 - link

    Yeah, Pac-man was... honestly larger than I thought. 16 kilobytes of ROM and 4K of RAM. Attached to a 3-MHz Z80.
    I'd say they're all popcorn chips today, but popcorn chips are an order of magnitude more capable.
  • Lord of the Bored - Saturday, May 23, 2020 - link

    The thing I find most amusing about this is that it failed to identify the simple algorithms that constitute ghost behavior. The AI thinks "don't kill Pac-Man" is a game rule.
    Even in the brief snippet of play shown, the AI failed to learn that the ghosts flee Pac-Man when he is energized.
  • GeoffreyA - Sunday, May 24, 2020 - link

    Quite astounding, even if these are early first steps. Perhaps there'll come a time when we see an AI generating something like Elder Scrolls 28: Return to Valenwood ;)
  • Ej24 - Sunday, May 24, 2020 - link

    Nah 100 years from now we'll only be on elder scrolls 7.