Hitman: Absolution

Reinforcement Learning Animation with Michael Büttner

Alex J. Champandard on August 7, 2012

Some of the most mathematically challenging and computationally intensive AI techniques are actually being pioneered in the field of real-time character animation. Reinforcement learning promises to solve many of the problems associated with hand-authoring of animations. Michael Büttner details how IO Interactive applied these techniques in this speaker spotlight for the Vienna Game/AI Conference 2012, to be held on September 18th and 19th.

Tickets to the conference are available here, along with tickets for a workshop on procedural animation. Prices range from €96 to €192, with discounts for students.

References

In his interview below, Michael cites the following papers as inspiration for the research he's done at IO Interactive.

Motion Fields for Interactive Character Animation
Y. Lee, K. Wampler, G. Bernstein, J. Popović, Z. Popović
Download PDF, SIGGRAPH Asia 2010.
Near-optimal Character Animation with Continuous Control
A. Treuille, Y. Lee, Z. Popović
Download PDF, SIGGRAPH 2007.
Responsive Characters from Motion Fragments
J. McCann, N. Pollard
Download PDF, SIGGRAPH 2007.

On HITMAN: ABSOLUTION


HITMAN: ABSOLUTION is the fifth entry in the critically acclaimed franchise developed by IO Interactive. It's a stealth-action title with sandbox gameplay, and the series has historically placed a strong focus on artificial intelligence techniques.

This entry pushes the boundaries of technology on many fronts, crowd simulation and character animation among them. Pulling this off efficiently required a strong focus on techniques such as reinforcement learning.

About Michael

Michael Büttner is a veteran game developer, currently Technical Producer at IO Interactive, having previously worked there as both a Lead Animation Programmer and an AI Programmer for over four years. He's worked on the core AI and animation technology for the upcoming HITMAN: ABSOLUTION title, including the application of reinforcement learning in the game.

He previously worked at game companies such as Neon Software GmbH, and as an independent game developer before it was considered cool. In between, he spent four years on the OpenOffice team at Sun Microsystems.

Behind the Scenes

Q: What were the major challenges in the animation system previously used by IO Interactive that you wanted to address?

MB: We used to use a standard animation graph approach in our animation system. Each state a character can be in has to be explicitly specified, as do the transitions that bring us from one state to another. An animation graph is pretty much a state machine, which can look as simple as IDLE <-> MOVE.
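To make that concrete, here is a minimal sketch of such a hand-authored graph in C++. The state names and the AnimGraph structure are invented for illustration; this is not IO Interactive's actual code.

```cpp
// Minimal hand-authored animation graph: every state and every legal
// transition is spelled out explicitly, and game code drives the changes.
enum class AnimState { Idle, Move };

struct AnimGraph {
    AnimState current = AnimState::Idle;

    // Game code must call this every frame with the current gameplay intent.
    void update(bool wantsToMove) {
        switch (current) {
        case AnimState::Idle:
            if (wantsToMove) current = AnimState::Move;   // IDLE -> MOVE
            break;
        case AnimState::Move:
            if (!wantsToMove) current = AnimState::Idle;  // MOVE -> IDLE
            break;
        }
    }
};
```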

“RL-based animation puts more freedom into the hands of animators.”

Of course, this requires some game code to trigger the necessary state changes. At some point we want to change the setup to use proper transition animations instead of a simple blend. Later still, we might want to reduce the latency of the system, which means introducing interruptible transitions and transitions from transitions. Naturally, we always need to adjust the game code to match the changed setup.
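Revising the earlier sketch shows how quickly this grows: a single proper transition animation becomes a new explicit state, and the game code has to change again to drive it. Again, these are hypothetical names for illustration, not shipping code.

```cpp
// The same two logical states, but with explicit transition states.
// Each refinement (interruptible transitions, transitions from
// transitions) adds more states and more game-code plumbing.
enum class AnimState { Idle, IdleToMove, Move, MoveToIdle };

struct AnimGraph {
    AnimState current = AnimState::Idle;

    void update(bool wantsToMove, bool transitionFinished) {
        switch (current) {
        case AnimState::Idle:
            if (wantsToMove) current = AnimState::IdleToMove;
            break;
        case AnimState::IdleToMove:
            if (transitionFinished) current = AnimState::Move;
            // An interruptible transition would also need an edge here,
            // e.g. if (!wantsToMove) current = AnimState::MoveToIdle;
            break;
        case AnimState::Move:
            if (!wantsToMove) current = AnimState::MoveToIdle;
            break;
        case AnimState::MoveToIdle:
            if (transitionFinished) current = AnimState::Idle;
            break;
        }
    }
};
```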

The consequence of this workflow is that animators depend heavily on programmers to trigger their animations whenever the setup changes. This requires constant back-and-forth communication between programmer and animator, which ultimately hurts iteration times and makes it unnecessarily hard to reach the desired quality level. The question, then, is firstly how to improve the workflow and put more freedom into the hands of animators, and secondly how to raise the overall visual fidelity of the game. This is the challenge I tried to address with the RL-based animation system.

Q: Reinforcement learning has become increasingly popular in academic research into animation controllers. Why is RL a promising technique in this area?

MB: Wouldn’t it be nice to be able to feed high-level goals like “Go to this location” directly to the animation system, which then automatically and autonomously decides on the best sequence of animations to play in order to achieve that goal?

Reinforcement learning can take such an objective and turn it into information a real-time controller can use to select animations. Its most remarkable property is that it can properly deal with delayed gratification. For example, even when no single motion fragment can immediately reach the goal and several need to be chained together, RL can provide us with the globally best sequence of actions, i.e. motion fragments.
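As a toy illustration of that delayed-gratification property, the sketch below runs value iteration over a tiny hand-made set of motion fragments. The fragments, rewards, and successor graph are invented for this example; a real motion-field controller works over far denser, continuous data, as in the papers listed above. Only the final "arrive" fragment earns any immediate reward, yet the propagated values let a greedy controller chain turn, walk, and arrive from a standing start.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Toy version of the value-iteration idea behind RL-driven clip selection.
// States are motion fragments; an action is "play successor fragment t".
// Only one edge reaches the goal (immediate reward), yet value iteration
// propagates that reward backwards through the graph, so the controller
// picks a globally optimal *chain* of fragments -- delayed gratification.
int main() {
    const int N = 4;  // fragments: 0=idle, 1=turn, 2=walk, 3=arrive
    std::vector<std::vector<int>> succ = {
        {0, 1},  // from idle: keep idling or start turning
        {2},     // turn must flow into walk
        {2, 3},  // walk can continue or arrive
        {3},     // arrived: absorbing self-loop
    };
    auto reward = [](int from, int to) {
        return (to == 3 && from != 3) ? 1.0 : 0.0;  // only reaching the goal pays
    };

    const double gamma = 0.9;  // discount: prefer shorter chains
    std::vector<double> V(N, 0.0);

    // Value iteration: V(s) = max over successors t of r(s,t) + gamma*V(t).
    for (int iter = 0; iter < 100; ++iter) {
        std::vector<double> Vnew(N, 0.0);
        for (int s = 0; s < N; ++s) {
            double best = -1e9;
            for (int t : succ[s])
                best = std::max(best, reward(s, t) + gamma * V[t]);
            Vnew[s] = best;
        }
        V = Vnew;
    }

    // Greedy policy: from each fragment, pick the highest-valued successor.
    for (int s = 0; s < N; ++s) {
        int bestT = succ[s][0];
        double best = -1e9;
        for (int t : succ[s]) {
            double q = reward(s, t) + gamma * V[t];
            if (q > best) { best = q; bestT = t; }
        }
        std::printf("fragment %d: V=%.3f, next=%d\n", s, V[s], bestT);
    }
}
```

Note that "idle" and "turn" earn zero immediate reward from any single step, yet they end up with positive values (0.81 and 0.9), which is exactly what lets the controller commit to a multi-fragment plan toward the goal.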

Q: Some reinforcement learning techniques work on motion graphs at a very high-level, while others take low-level frame deltas and apply RL there. Which research in particular was your work based on?

MB: Last year, I stumbled across Motion Fields for Interactive Character Animation and got hooked on the idea of using reinforcement learning to create controllers that select sequences of motion clips to achieve a goal. Since then I've tried to read as much about RL as I could find; I found Reinforcement Learning: An Introduction to be a gentle but still very thorough treatment of the subject. Searching for further applications of RL to character animation turned up papers such as Near-optimal Character Animation with Continuous Control and Responsive Characters from Motion Fragments, both of which heavily influenced my work.


Find out more about character animation and reinforcement learning in HITMAN: ABSOLUTION, and whether it's a good fit for your game engine, at the Vienna Game/AI Conference 2012. Tickets are now available online! Click here to secure your seat...
