Tech Secrets Behind Cosmonious High’s Interactive VR Characters

Cosmonious High contains 18 characters across six species all created by a team with zero dedicated animators. That means lots and lots of code to create realistic behaviors and Owlchemy-quality interactivity! The ‘character system’ in Cosmonious High is a group of around 150 scripts that together answer many design and animation problems related to characters. Whether it’s how they move around, look at things, interact with objects, or react to the player, it’s all highly modular and almost completely procedural.

This modularity enabled a team of content designers to create and animate every single line of dialogue in the game, and for the characters to feel alive and engaging even when they weren’t in the middle of a conversation. Here’s how it works.

Guest Article by Sean Flanagan & Emma Atkinson

Cosmonious High is a game from veteran VR studio Owlchemy Labs about attending an alien high school that’s definitely completely free of malfunctions! Sean Flanagan, one of Owlchemy’s Technical Artists, created Cosmonious High’s core character system among many other endeavors. Emma Atkinson is part of the Content Engineering team, collectively responsible for implementing every narrative sequence you see and hear throughout the game.

The Code Side

Almost all code in the character system is reusable and shared between all the species. The characters in Cosmonious High are a bit like modular puppets—built with many of the same parts underneath, but with unique art and content on top that individualizes them.

From the very top, the character system code can be broken down into modules and drivers.

Modules

Every character in Cosmonious High gets its behavior from its set of character modules. Each character module is responsible for a specific domain of problems, like moving or talking. In code, this means that each type of Character is defined by the modules we assign to it. Characters are not required to implement each module in the same way, or at all (e.g., the Intercom can't wave).
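To make the idea concrete, here is a minimal sketch of module-based composition in Python. This is not Owlchemy's actual code (theirs is Unity/C#); every class and method name here is a hypothetical stand-in for the pattern described above, where a character type is defined purely by which modules it is assigned:

```python
# Illustrative sketch (names hypothetical): a character type is defined
# by the set of modules assigned to it, and may lack some modules entirely.

class CharacterModule:
    """Base class for one behavior domain (locomotion, speech, etc.)."""
    def __init__(self, character):
        self.character = character

class CharacterSpeech(CharacterModule): pass
class CharacterPersonality(CharacterModule): pass
class CharacterLocomotion(CharacterModule): pass

class Character:
    def __init__(self, name, module_types):
        # Instantiate only the modules this character type supports.
        self.name = name
        self.modules = {t.__name__: t(self) for t in module_types}

    def get(self, module_type):
        # Returns None when a character simply lacks that ability,
        # e.g. a stationary character with no locomotion module.
        return self.modules.get(module_type.__name__)

bipid = Character("Bipid", [CharacterSpeech, CharacterPersonality,
                            CharacterLocomotion])
intercom = Character("Intercom", [CharacterSpeech])

assert bipid.get(CharacterLocomotion) is not None
assert intercom.get(CharacterLocomotion) is None  # can't move, and that's fine
```

Callers ask for a module and handle its absence, rather than every character type being forced to stub out abilities it doesn't have.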

Some of our most frequently used modules were:

CharacterLocomotion – Responsible for locomotion. It specifies the high-level locomotion behavior common to all characters. The actual movement comes from each implementation. All of the ‘grounded’ characters—the Bipid and Flan—use CharacterNavLocomotion, which moves them around on the scene Nav Mesh.

CharacterPersonality – Responsible for how characters react to the player. This module has one foot in content design—its main responsibility is housing the responses characters have when players wave at them, along with any conversation options. It also houses a few ‘auto’ responses common across the cast, like auto receive (catching anything you throw) and auto gaze (returning eye contact).

CharacterEmotion – Keeps track of the character’s current emotion. Other components can add and remove emotion requests from an internal stack.

CharacterVision – Keeps track of the character’s current vision target(s). Other components can add and remove vision requests from an internal stack.

CharacterSpeech – How characters talk. This module interfaces directly with Seret, our internal dialogue tool, to queue and play VO audio clips, including any associated captions. It exposes a few events for VO playback, interruption, completion, etc.

It’s important to note that animation is a separate concern. The Emotion module doesn’t make a character smile, and the Vision module doesn’t turn a character’s head—they just store the character’s current emotion and vision targets. Animation scripts reference these modules and are responsible for transforming their data into a visible performance.
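The request-stack pattern used by the Emotion and Vision modules can be sketched as follows. This is a hypothetical Python illustration, not the shipped code: the module only stores requests and reports the winner, and a separate animation script would poll `current` each frame to produce the visible performance:

```python
# Hypothetical sketch of the Emotion module's request stack: other systems
# push and pop requests; the module stores state but animates nothing.

class CharacterEmotion:
    def __init__(self, default="neutral"):
        self.default = default
        self.requests = []  # list of (requester, emotion); last added wins

    def add_request(self, requester, emotion):
        self.requests.append((requester, emotion))

    def remove_request(self, requester):
        self.requests = [r for r in self.requests if r[0] != requester]

    @property
    def current(self):
        # Top of the stack wins; fall back to the default emotion.
        return self.requests[-1][1] if self.requests else self.default

emotion = CharacterEmotion()
emotion.add_request("dialogue_line_12", "happy")
emotion.add_request("player_threw_object", "surprised")
assert emotion.current == "surprised"   # newest request takes priority

emotion.remove_request("player_threw_object")
assert emotion.current == "happy"       # underlying request is restored
```

Because requests are removed by requester rather than popped blindly, a short reaction can expire without clobbering a longer-running emotional state underneath it.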

Drivers

The modules that a character uses collectively outline what that character can do, and can even implement that behavior if it is universal enough (such as Speech and Personality). However, the majority of character behavior is not capturable at such a high level. The dirty work gets handed off to other scripts—collectively known as drivers—which form the real ‘meat’ of the character system.

Despite their more limited focus, drivers are still written to be as reusable as possible. Some of the most important drivers—like CharacterHead and CharacterLimb—invisibly represent some part of a character in a way that is separate from any specific character type. When you grab a character’s head with Telekinesis, have a character throw something, or tell a character to play a mocap clip, those two scripts are doing the actual work of moving and rotating every frame as needed.
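A driver like this boils down to owning some part of the body and easing it toward whatever target is currently requested, every frame. The sketch below is an illustrative Python reduction of that idea (all names hypothetical, and a single float stands in for a full 3D transform):

```python
# Sketch of a per-frame logic driver: it owns a body part's 'transform'
# and smoothly moves it toward the currently requested target, whether
# that target comes from a grab, a throw, or a mocap clip.

class CharacterLimb:
    def __init__(self, position=0.0):
        self.position = position   # stand-in for a full 3D transform
        self.target = position

    def set_target(self, target):
        self.target = target

    def tick(self, dt, speed=5.0):
        # Close a fraction of the remaining distance each frame
        # (simple exponential smoothing toward the target).
        self.position += (self.target - self.position) * min(1.0, speed * dt)

limb = CharacterLimb()
limb.set_target(1.0)           # e.g. the player grabbed the limb
for _ in range(60):            # simulate one second at 60 fps
    limb.tick(1 / 60)

assert abs(limb.position - 1.0) < 0.01  # settled near the target
```

Because the driver re-evaluates its target every tick, whichever system most recently set the target (grab, throw, mocap) automatically takes over without any hand-off logic.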

Drivers can be loosely divided into logic drivers and animation drivers.

Logic drivers, like CharacterHead and CharacterLimb, don’t do anything visible themselves, but they capture and perform some reusable part of character behavior and expose any important info. Animation drivers reference logic drivers and use their data to create character animation—moving bones, swapping meshes, solving IK, etc.

Animation drivers also tend to be more specific to each character type. For instance, everyone with eyes uses a few instances of CharacterEye (a logic driver), but a Bipid actually animates their eye shader with BipidAnimationEyes, a Flan with FlanAnimationEyes, etc. Splitting the job of ‘an eye’ into two parts like this allows for unique animation per species that is all backed by the same logic.
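The two-part split of ‘an eye’ might look like the following sketch. Again this is hypothetical Python rather than the game's C#: the shared logic driver only tracks a look target, while each species supplies its own animation driver that turns that data into a visual:

```python
# Illustrative split (names hypothetical): one shared logic driver,
# multiple species-specific animation drivers backed by the same data.

class CharacterEye:
    """Logic driver: tracks where the eye should look; renders nothing."""
    def __init__(self):
        self.look_target = (0.0, 0.0)

    def set_look_target(self, x, y):
        self.look_target = (x, y)

class BipidAnimationEyes:
    """Animation driver: turns shared eye logic into a Bipid-style visual."""
    def __init__(self, eyes):
        self.eyes = eyes

    def animate(self):
        # A real driver would push these values into an eye shader;
        # here we just report what would be drawn.
        return [f"bipid pupil -> {eye.look_target}" for eye in self.eyes]

class FlanAnimationEyes:
    """Animation driver: same logic, Flan-style visual treatment."""
    def __init__(self, eyes):
        self.eyes = eyes

    def animate(self):
        return [f"flan pupil -> {eye.look_target}" for eye in self.eyes]

eyes = [CharacterEye(), CharacterEye()]
eyes[0].set_look_target(0.2, -0.1)

# The same logic drivers feed two different visual treatments:
assert BipidAnimationEyes(eyes).animate()[0].startswith("bipid")
assert FlanAnimationEyes(eyes).animate()[0].startswith("flan")
```

Adding a new species then only means writing a new animation driver; the gaze logic, and everything that feeds it, is reused unchanged.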

Continue on Page 2: The Content Side »