
Vol.71 - Ready or Not Development Briefing

Attention Officers,


Thank you for joining us for the 71st edition of our development briefing, March 29th, 2024!

This week we’ll look at our new dynamic dialogue systems for AI, which will give us more flexibility and nuance for storytelling, as well as showcase some of the recent work on our lip sync system.

These development briefings serve to keep you in the loop about parts of our ongoing support for Ready or Not, although they do not encompass everything that we’re working on at a given moment. Please keep in mind that everything in this development briefing is work in progress and subject to change.

Vol.71 Development Briefing Summary Points: (not a changelog)
  • Dynamic Dialogue System
    • Allows for easily creatable dialogue strings between multiple characters
    • Intricate level of customization
    • Complements existing reaction voice line system
    • New dialogue being written and recorded
  • Lip Sync System
    • Video showcase of the recent early implementation of our lip sync system

[h2]Dynamic Dialogue[/h2]

When you and your SWAT AI officers are moving through missions you may have noticed their penchant for verbally reacting to their environments. With a few notable exceptions, these reactions are generally one-liner voice lines prompted by moving through invisible spaces that we call “reaction volumes.”

We are now working on a supplemental, more streamlined and nuanced dialogue system that allows us to easily create dynamic dialogue strings between characters on a large scale. These interactions can be triggered by invisible spaces called “dialogue volumes.”

A great strength of this new system is the ease and depth with which dialogue parameters can be customized by any developer working on a level.

How and when dialogue is triggered, the number of participants or voice lines involved, how dialogue audio is emitted, and granular elements of dialogue content are all among the variables we can adjust.

For the dialogue volumes themselves we can tweak the trigger delay, which specific characters can trigger the volume (i.e. SWAT, the player, civilians, suspects), and which characters must be present to trigger a particular dialogue.

To avoid repetition, we can also specify whether a dialogue is supposed to be repeated as well as give certain dialogue strings or lines a random chance of occurring in the first place.
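As a rough illustration only (the type and field names below are our own sketch for this briefing, not the game’s actual code), the per-volume settings described above might be grouped roughly like this:

[code]
// Hypothetical sketch of a dialogue volume's tunable settings.
// All names here are illustrative, not actual Ready or Not code.
#include <string>
#include <vector>

enum class ECharacterType { SwatAI, Player, Civilian, Suspect };

struct FDialogueVolumeConfig
{
    float TriggerDelaySeconds = 0.0f;              // wait before the dialogue starts
    std::vector<ECharacterType> AllowedTriggers;   // who can set the volume off
    std::vector<ECharacterType> RequiredPresent;   // who must be nearby for this dialogue
    bool bCanRepeat = false;                       // replay on later entries, or fire only once
    float ChanceToPlay = 1.0f;                     // 0-1 random chance the dialogue plays at all
    std::string DialogueStringId;                  // which dialogue string to run
};
[/code]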

The dialogue voice line emitter can be attached to practically anything, which opens up a lot of avenues. This also means that static environmental characters like those in the Station can easily have dialogue as well.

(Image below: Examples of some static character scenes around the Station that may benefit from this system)

The content of the lines for each participant in a dialogue string can be set to a specific voice line, or to a randomized voice line that corresponds to a given situation. The order of the voice lines and which character delivers each one can also be specified. If a character isn’t present or required for a dialogue, their line can simply be left out of the dialogue string.
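Continuing the same hypothetical sketch from above (again, illustrative names only, reusing the ECharacterType enum), a dialogue string could then be an ordered list of lines, each tied to a speaker and either a fixed or situation-based randomized voice line, with optional lines skipped when their speaker is absent:

[code]
// Hypothetical continuation of the earlier sketch; not actual game code.
struct FDialogueLine
{
    ECharacterType Speaker;      // which participant delivers this line
    std::string VoiceLineId;     // a specific recorded line...
    std::string SituationTag;    // ...or a tag used to pick a randomized line for the situation
    bool bOptional = false;      // skip this line if the speaker isn't present or required
};

struct FDialogueString
{
    std::string Id;
    std::vector<FDialogueLine> LinesInOrder;  // played in the specified order
};
[/code]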

More dialogue is also being written and recorded that should plug right into this setup.

Altogether, these features harmonize to create a dynamic, character-rich atmosphere that complements our existing reaction voice line system.


[h2]In Sync[/h2]

The lip sync animation system mentioned in our previous development briefing has undergone a facelift during implementation, and we’re ready to show it off.

After plenty of tweaking to get out of the prototype phase, the lip movements generated by our system are progressing nicely and pairing with some additional facial animations.

Still, the feature is in its early stages, pending work on improved animation integration for head and eye tracking relative to in-game objects. In its completed state it will add a nice layer of polish to our dialogue system and voice lines.

(Video below: SWAT officer model making callouts with the WIP lip sync system, utilizing alert and angry voice lines)

[previewyoutube][/previewyoutube]

[h2]Conclusion[/h2]


As you can see, our longer-term AI enhancements are not just limited to behavior but also cover improved storytelling and immersion across the game.

These new systems are potentially applicable to both current and future content.

This concludes our 71st development briefing. Be sure to tune in next time for more development news!

Special thanks to Zack for his photos and brilliant work on the new dialogue system, and to Alex and killowatt for their work and footage on this high-fidelity lip sync system.



Make sure you follow Ready or Not on Steam here.
Our other links: Discord, X, YouTube, Instagram, Facebook.

[h3]Stack up and clear out.
VOID Interactive[/h3]