Elliot Gardner 

Are AI-generated video games really on the horizon?

Microsoft and Google have both recently released new generative AI models that simulate video game worlds – with notable limitations. What can they do?

Ninja Theory’s game Bleeding Edge, used to develop Microsoft’s gaming AI tool Muse. Photograph: Microsoft

Another month, another revolutionary generative AI development that will apparently fundamentally alter how an entire industry operates. This time tech giant Microsoft has created a “gameplay ideation” tool, Muse, which it calls the world’s first Wham, or World and Human Action Model. Microsoft claims that Muse will speed up the lengthy and expensive process of game development by allowing designers to play around with AI-generated gameplay videos to see what works.

Muse is trained on gameplay data from UK studio Ninja Theory’s game Bleeding Edge. It has absorbed tens of thousands of hours of people’s real gameplay, both footage and controller inputs. It can now generate accurate-looking mock gameplay clips for that game, which can be edited and adapted with prompts.

All well and good, but in an announcement video for Muse, Microsoft Gaming CEO Phil Spencer caused confusion when he said that it could be invaluable for the preservation of classic games: AI models, he implied, could “learn” those games and emulate them on modern hardware. It’s not clear how this would be possible. Further muddying the waters, Microsoft’s overall CEO Satya Nadella then implied in a podcast interview that Muse was the first step in creating a “catalogue” of AI-generated games.

But Muse, as it stands, can’t create a game – it can only create made-up footage of a game. So just what is this new gaming AI tool? A swish addition to game developers’ tool belts? Or the first step towards an era of AI-generated gaming detritus?

The idea is that designers (or indeed players) can try out ideas with Muse without spending hours (or days) in a game engine implementing something that might not feel good or even work. If a designer wants to see how, say, a power-up would behave in-game, they could generate a mock video of it in action, with the AI filling in the gaps.

“Game engines are complicated, messy things and it takes a lot of time to simulate things – they’re not built for that,” says Julian Togelius, associate professor of computer science and engineering at the New York University Tandon School of Engineering. “[Working with] a simulation of the game can be much easier and faster. The opportunities opened up by this kind of study are pretty big, but the limitations are also real.”

AI gameplay simulations aren’t totally new – Google’s GameNGen project created a playable version of Doom that ran without a game engine in 2024. But the problem has always been consistency. Google’s Doom simulation worked well at first, but the longer you played, the more the AI would “dream up” game elements that weren’t accurate. This is what Microsoft claims to have solved with Muse, but it comes with a massive caveat.

“This particular model is trained on 500,000 game sessions, so likely around 100,000 hours of gameplay. But it only works because you have so much data. If you move far beyond what’s been recorded, simulations generally stop behaving well,” explains Togelius.

Microsoft has stated that it is already using Muse to develop real-time playable AI models trained on its other first-party games. But while the approach suits live-service games such as Bleeding Edge, which generate thousands of hours of live gameplay to learn from, for smaller games and single-player titles it would be a monumental and probably pointless effort to train a generative AI model on each individual game.

“It’s an amazing technical hurdle that they’ve jumped, but it kind of feels like they’re going through their Zoom moment: a product coming into a market that doesn’t really have a purpose,” says Ken Noland, the veteran game designer and self-described AI realist who runs AI Guys, an AI-focused co-development company. “The technology is cool, and don’t get me wrong, video generation is not an easy thing to do … I just don’t see its target audience. Game developers won’t be able to use it for rapid production because it doesn’t actually, aside from visualising a particular thing, address any underlying game development issues.”

Ultimately, there appears to be a disconnect between Spencer and Nadella’s comments and what Muse actually does at the moment. Unless something significant changes, it doesn’t appear capable of creating playable simulations of classic games, and it certainly doesn’t create entirely “new” AI-generated games. It isn’t even clear how Muse’s generated videos could be translated into actual gameplay.

AI-generated video games may yet be on the horizon. Google quietly released Genie 2 a few months back, which is capable of generating “playable worlds” – but that’s not what Muse does, at least for now. “I will choose to graciously interpret what Satya said as visions of what could be done in the future,” says Togelius. “It’s entirely possible that we will get to some version of that, but it’s not around the corner. What Microsoft has done in this paper is a foundation stone.”

 
