Request

Automate the way computer opponents drive in a motorcycle racing videogame so that their level of challenge and behavior is as close as possible to that of a human opponent.

Starting point

Videogame development requires hours spent programming every last detail, from the virtual environment to the simulation of the behavior of motorcycle riders on the racetrack.
These activities are carried out by a team of expert developers who program every single action of the computer-controlled competitors from scratch, with lengthy development times. In addition, the results do not always meet end users' expectations in terms of realistic racing behavior.

Solution implemented

The introduction of artificial intelligence methods completely changed the way the client approached development: programmers no longer implement pre-established behaviors, but instead provide the AI with a view of the surrounding ‘world’, a goal, and the actions it can use to reach that goal. Through a sophisticated reward mechanism, the AI system is told which behaviors are useful for reaching the goal and which, on the other hand, are harmful. It is then up to the AI software to learn the best way to reach the pre-established goal.
This approach is called ‘reinforcement learning’, and it makes it possible to create AI that perceives the environment around it and learns the degree to which its actions are useful or counterproductive. In this way, the game strategy emerges from the AI software's interaction with the environment, without having to be programmed from scratch.
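The loop below is a minimal sketch of this idea: an agent acts, the environment responds, and a reward signals whether the action helped or hurt. The RaceEnv class, its state fields, and the reward terms are hypothetical stand-ins invented for illustration; the client's actual simulator and reward design are not described here.

```python
# Minimal sketch of a reinforcement-learning interaction loop.
# RaceEnv and its reward shaping are hypothetical stand-ins.

class RaceEnv:
    """Toy stand-in for the racing simulator."""

    def reset(self):
        return {"speed": 0.0, "track_position": 0.0, "lean_angle": 0.0}

    def step(self, action):
        # A real simulator would advance the physics here; we return
        # a dummy next state, a scalar reward, and a done flag.
        next_state = {
            "speed": 10.0,
            "track_position": 0.01,
            "lean_angle": action["lean"],
        }
        reward = next_state["track_position"]           # progress is rewarded
        if abs(action["lean"]) > 1.0:                   # crashing is penalized
            reward -= 1.0
        return next_state, reward, False


env = RaceEnv()
state = env.reset()
for _ in range(100):
    action = {"throttle": 0.8, "lean": 0.1}  # a learned policy would choose this
    state, reward, done = env.step(action)
    if done:
        state = env.reset()
```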

Custom infrastructure, put in place to manage the training phase, made it possible to simulate an extremely high number of races in very little time (around 200,000 races in a single working day, more than a professional driver can complete in a lifetime). During training, the AI program first analyzes whether the consequences of each of its actions are positive or negative with respect to its goal (the exploration phase), and subsequently leverages its accumulated experience and knowledge to obtain the best possible reward (the exploitation phase).
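The case study does not specify how the balance between exploration and exploitation was implemented; an epsilon-greedy rule, sketched below purely for illustration, is one common approach. The action names, the stand-in policy choice, and the decay schedule are all assumptions for the example.

```python
import random

# Hypothetical discrete action set for illustration only.
ACTIONS = ["accelerate", "brake", "lean_left", "lean_right"]


def select_action(policy_action, epsilon):
    """Epsilon-greedy trade-off: with probability epsilon take a random
    action (exploration); otherwise follow the current policy (exploitation)."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return policy_action


# Epsilon is typically annealed so the agent explores early in training
# and increasingly exploits what it has learned later on.
for episode in range(10_000):
    epsilon = max(0.05, 0.999 ** episode)
    action = select_action("accelerate", epsilon)  # "accelerate" stands in for the policy's choice
```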

The neural network was implemented in PyTorch, an open-source deep learning framework developed by the Facebook AI Research (FAIR) group. The model adopted was Actor-Critic, characterized by two distinct networks that interact with each other. The Actor network determines the action to carry out in a given state.
The external environment (in this case, the racetrack, the weather, and the characteristics of the motorcycle) is detected and measured via input sensors connected to the Actor network, which processes them to evaluate the current state. The state estimate and any rewards are passed to the agent, which carries out an action that modifies the state. The Critic network then evaluates the consequences of that action.

The goal of the interaction between the two networks is to maximize the rewards obtained for the different actions taken over time.
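As a rough illustration of this interaction, the sketch below wires up a minimal Actor-Critic pair in PyTorch. The layer sizes, state and action dimensions, and the single-step advantage update are assumptions made for the example, not the production architecture.

```python
import torch
import torch.nn as nn

STATE_DIM = 16   # hypothetical number of sensor inputs
ACTION_DIM = 4   # hypothetical number of discrete control actions


class Actor(nn.Module):
    """Maps the sensed state to a probability distribution over actions."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, state):
        return torch.softmax(self.net(state), dim=-1)


class Critic(nn.Module):
    """Estimates the value of a state, used to judge the Actor's choices."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state):
        return self.net(state)


actor, critic = Actor(), Critic()
state = torch.randn(1, STATE_DIM)       # stand-in for one batch of sensor readings

probs = actor(state)                    # Actor proposes an action distribution
dist = torch.distributions.Categorical(probs)
action = dist.sample()

reward = torch.tensor([1.0])            # reward returned by the simulator
value = critic(state)                   # Critic's estimate of the state's value
advantage = reward - value              # how much better than expected the outcome was

# The Actor is pushed toward actions with positive advantage; the Critic
# is trained to reduce its prediction error. Both losses would be minimized
# with a standard optimizer over many simulated races.
actor_loss = -dist.log_prob(action) * advantage.detach()
critic_loss = advantage.pow(2)
```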

Results

Our solution makes it possible for players to be truly challenged by smarter, faster opponents that exploit every single error the player makes. The result is extremely realistic and natural racing behavior, with maneuvers and strategies very similar to those of a professional motorcyclist.

In terms of performance as well, the AI's on-track behavior is close to that of human drivers, with lap times far lower than those achieved by a traditionally programmed system. Group behavior is also more aggressive, yet remains precise with respect to the other competitors on the racetrack.

Current developments

Multiple in-house projects are underway that apply AI methods to optimize other videogame components.
In particular, we are working on a solution to increase the realism of driver behavior, both on the motorcycle and during falls, through a motion-synthesis approach guided by neural networks and based on the analysis of existing videos.