
Interview with Metamovie crew, noise suppression with RNNoise, desktop progress

Hello everyone and welcome to another weekly update!

As the work is ramping up on proper desktop support, the first fruits of this process are starting to arrive in the form of some related features. In the latest build, the view and ear position can now be overridden for the user, allowing you to create VR third person modes, build avatars with custom neck models, or have the streamer camera render audio from its viewpoint for better footage.

We have also integrated the open source RNNoise library, which uses recurrent neural networks to filter out noise from your microphone, greatly improving the quality of everyone’s voice while using very little CPU and without requiring an RTX card.

This Friday we also did the first of our live interview streams, with 21 questions for Jason Moore, the director of the Metamovie project, and two of the actors: Nicole Rigo and Kenneth Rougaeu. If you missed the livestream, you can check it out below; it’s definitely worth a watch.

The cloud infrastructure has received some major optimizations as well, reducing the cost of the services while preserving the same functionality, ensuring that it scales even better with more users and gives us more headroom to improve functionality.



[h2]Interview with director Jason Moore and Metamovie actors[/h2]
This week on the weekly Neos stream, our Community Manager Nexulan interviewed Jason Moore, director of Alien Rescue, accompanied by actress Nicole Rigo and actor Kenneth Rougaeu. Together they go through a series of questions ranging from life, to how they got into showbiz, to making interactive film through VR.

If you missed the interview, the recording is certainly worth a watch: you can learn quite a lot about the history of the Metamovie project and the people behind it, and hear some really fun behind-the-scenes stories.

Our plan is to do an interview stream like this every month, picking some of the prominent members and creators from the community. Follow us on Twitch if you haven’t already so you don’t miss our next stream and get a chance to ask questions live!

[previewyoutube][/previewyoutube]

[h2]First steps towards proper desktop support[/h2]
A large chunk of the work is now devoted to proper desktop support. It’s still mainly in the design phase, as we look for the best ways to architect the system to ensure both high flexibility and longevity by making it easy to extend and maintain as part of the codebase.

We have quite a few systems crystallizing from this design work, with a few of them implemented already. One of the primary underlying mechanisms is the ability to dynamically switch between VR and screen mode without having to restart. In the latest public build the head output system has been extended and unified to support this.

When the screen view is active, a set of subsystems will be responsible for driving the positions of the head and hands instead of the VR device, as well as handling the general interactions. Those parts are still being heavily designed, but we have some good concepts coming together as well.

In particular we’re aiming to make the system highly modular, allowing for hybrid combinations: for example, mouse + keyboard interactions combined with a VR headset, or a VR setup with only a single controller where the other hand is simulated, and of course the full desktop mode where everything is simulated and controlled via mouse, keyboard, gamepad or touchscreen.
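
To give a rough picture of what this modularity could look like, here is a minimal conceptual sketch in C#. All of the types and names below are made up for illustration; this is not actual Neos code, just the general idea of assigning an interchangeable driver to each body node.

[code]
// Illustrative sketch only, not actual Neos code: each body node gets its own driver,
// so hybrid setups are just different driver assignments.
using System.Collections.Generic;
using System.Numerics;

public readonly record struct Pose(Vector3 Position, Quaternion Rotation);

public enum BodyNode { Head, LeftHand, RightHand }

public interface IBodyNodeDriver
{
    // Produces the node's pose for the current frame.
    Pose UpdatePose(float deltaTime);
}

public sealed class VRTrackedDriver : IBodyNodeDriver
{
    // A real implementation would read the pose from the VR runtime.
    public Pose UpdatePose(float deltaTime) => new(Vector3.Zero, Quaternion.Identity);
}

public sealed class SimulatedDriver : IBodyNodeDriver
{
    // A real implementation would derive the pose from mouse, keyboard, gamepad or touch input.
    public Pose UpdatePose(float deltaTime) => new(Vector3.Zero, Quaternion.Identity);
}

public static class InputRigs
{
    // Single-controller hybrid: head and left hand are VR tracked, the right hand is simulated.
    public static Dictionary<BodyNode, IBodyNodeDriver> SingleControllerHybrid() => new()
    {
        [BodyNode.Head] = new VRTrackedDriver(),
        [BodyNode.LeftHand] = new VRTrackedDriver(),
        [BodyNode.RightHand] = new SimulatedDriver()
    };

    // Full desktop mode: everything is simulated.
    public static Dictionary<BodyNode, IBodyNodeDriver> Desktop() => new()
    {
        [BodyNode.Head] = new SimulatedDriver(),
        [BodyNode.LeftHand] = new SimulatedDriver(),
        [BodyNode.RightHand] = new SimulatedDriver()
    };
}
[/code]

With a layout like this, switching between full VR, desktop and the hybrid combinations becomes a matter of swapping which driver is assigned to each node, rather than rebuilding the whole input stack.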

If you’d like, you can check out the latest design notes below. There’s still a lot of work to be done and most of the system isn’t set in stone yet, so you can expect more things to take shape and fall in place as we go.



However, with some of the underlying bits already implemented, there are new features that you can already play with in the latest build, even in VR.

[h2]Overriding the user’s view and ear positions[/h2]
One of the first fruits of the additions for the desktop mode is the new ability to override where the user sees and hears from. This is needed for desktop mode, as the view direction and position are driven by code within the world, rather than by the physical position of the user’s head. With features like third person, we also need to decouple where the user’s head is from where they see the scene.

Thanks to these changes, you can now override the position of the user’s root, view and ears in the latest build, even when in VR. This has a lot of cool practical (and some less practical) applications. We’ve already seen users create VR third person modes with the ability to put your viewpoint behind your avatar, or avatars with a separable head that can be grabbed and moved away, taking the user’s viewpoint with it.

You could also use this to experiment with avatars where the viewpoint is driven by the avatar’s IK and neck model. This might be nauseating to a lot of users as their head motions won’t match 1:1, but it is an interesting way to experiment and create very unusual avatars.

For more practical uses, the streamer camera now has the ability to render audio from its viewpoint. When streaming or recording, this will make any spatialized audio have the correct position from the viewer’s point of view, at the cost of making it more confusing for the streamer.

In the future, we’ll build more VR functionality on top of this mechanism as well. For example, it allows you to put your viewpoint outside of your avatar while an animation plays on the character, or keep your view stationary in the world while your avatar walks through the environment.
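
If you’re curious how this fits together conceptually, here is a minimal sketch in C#. The types and names are made up for illustration and are not the actual Neos components; the key idea is simply that the tracked head pose, the rendering viewpoint and the listening position are resolved separately, each falling back to the previous one when no override is set.

[code]
// Conceptual sketch with made-up types, not the actual Neos components:
// the tracked head pose, the rendering viewpoint and the listening position
// are resolved independently, with optional overrides.
using System.Numerics;

public sealed class ViewAndEarsOverride
{
    public Vector3? ViewPositionOverride { get; set; } // null = follow the tracked head
    public Vector3? EarsPositionOverride { get; set; } // null = follow the view position

    // Where the scene is rendered from this frame.
    public Vector3 ResolveViewPosition(Vector3 trackedHeadPosition)
        => ViewPositionOverride ?? trackedHeadPosition;

    // Where spatialized audio is listened from this frame.
    public Vector3 ResolveEarsPosition(Vector3 trackedHeadPosition)
        => EarsPositionOverride ?? ResolveViewPosition(trackedHeadPosition);
}

public static class OverrideExamples
{
    // VR third person: render from a point behind and above the tracked head.
    public static void ThirdPerson(ViewAndEarsOverride o, Vector3 trackedHead, Vector3 back, Vector3 up)
        => o.ViewPositionOverride = trackedHead + back * 2f + up * 0.5f;

    // Streamer camera: keep the user's own view, but listen from the camera so the
    // recording gets correctly positioned audio (at the cost of confusing the streamer).
    public static void StreamerCamera(ViewAndEarsOverride o, Vector3 cameraPosition)
    {
        o.ViewPositionOverride = null;
        o.EarsPositionOverride = cameraPosition;
    }
}
[/code]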

We hope that you’ll have a lot of fun using these new features, building even cooler avatars and applications with them and we look forward to bringing you more cool features like this as we continue to work on the desktop mode.

[h2]Eliminating background microphone noise with RNNoise[/h2]
One of the smaller but impactful additions this week was the integration of the RNNoise library for filtering noise out of the microphone input. This open source library uses recurrent neural networks trained on a large dataset (6.4 GB) of noise data and is very effective at isolating voice even from very noisy inputs, filtering out unwanted sounds like breathing, headset rattling, keyboard noises and so on.

Since this library runs purely on CPU, doesn’t require GPU/RTX and is very fast, the new noise suppression is now on by default for all users. This will help improve the overall quality of voice for both existing and new users, making the “jet engine with a microphone” a thing of the past. As we get more desktop users in the future with all kinds of different microphones, this will hopefully prove to be very handy.

If you’d prefer to use only noise-gating or external noise elimination, you can disable the library in the settings, but we’d recommend leaving it on for most users. Even for high quality microphones it’ll eliminate any unwanted background noise and make the recording sound cleaner.

[previewyoutube][/previewyoutube]

In typical Neos fashion, we have done a full integration of this library with our engine and its audio system, making it accessible for processing audio clips as well. If you have an audio clip you’d like cleaned up, you can now do so through the inspector alongside the other audio processing options.

Since we couldn’t find any C# wrapper for this library, we have created our own and published it as the open source RNNoise.NET, in case you’d like to integrate the library into your own projects.
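
If you’d like a feel for what driving the library looks like, below is a minimal sketch of calling the native RNNoise C API from C# via P/Invoke. This is not the RNNoise.NET API surface (check the repository for that), just the library’s standard entry points: RNNoise works on 480-sample frames of 48 kHz mono audio, with samples scaled to the 16-bit range but stored as floats.

[code]
// Minimal P/Invoke sketch of the native RNNoise C API (not the RNNoise.NET surface).
// RNNoise processes 480-sample frames of 48 kHz mono audio; samples are floats
// scaled to the 16-bit range (-32768..32767).
using System;
using System.Runtime.InteropServices;

public static class RNNoiseNative
{
    const string Lib = "rnnoise"; // assumes the native library is on the loader's search path

    [DllImport(Lib)] public static extern IntPtr rnnoise_create(IntPtr model); // IntPtr.Zero = built-in model
    [DllImport(Lib)] public static extern float rnnoise_process_frame(IntPtr state, float[] output, float[] input);
    [DllImport(Lib)] public static extern void rnnoise_destroy(IntPtr state);
}

public sealed class Denoiser : IDisposable
{
    public const int FrameSize = 480; // 10 ms at 48 kHz
    private readonly IntPtr _state = RNNoiseNative.rnnoise_create(IntPtr.Zero);

    // Denoises one frame and returns the voice activity probability (0..1).
    public float ProcessFrame(float[] input, float[] output)
    {
        if (input.Length != FrameSize || output.Length != FrameSize)
            throw new ArgumentException($"Expected {FrameSize}-sample frames");
        return RNNoiseNative.rnnoise_process_frame(_state, output, input);
    }

    public void Dispose() => RNNoiseNative.rnnoise_destroy(_state);
}
[/code]

The value returned by rnnoise_process_frame is a voice activity probability, which can additionally be used for noise gating on top of the denoising itself.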

[h2]More optimizations for the cloud infrastructure[/h2]
Over the past week we’ve also spent some time analyzing the performance and cost of our cloud infrastructure and noticed some significant inefficiencies. While the costs are still manageable, looking for ways to reduce them is always important to ensure the infrastructure scales with more incoming users.

We have made several major changes to the cloud architecture to eliminate those, while maintaining the same functionality. One of the major sources of unnecessary cost was our use of the geo redundant storage system for all our data. This storage system replicates all stored data to another data center in a different part of the world.

This replication is crucial to ensure durability in case of a data center disaster (like flooding or fire), making sure there is a copy elsewhere, but it comes at the cost of both bandwidth and storage. A lot of the operational data, like upload chunks, thumbnails, processing queues and static data, is either temporary or unimportant, so paying for this replication isn’t worth it.

To mitigate this, we split the storage system into three parts. The original geo-replicated storage is kept for any data that needs to be durable: primarily your assets (items, worlds), transaction data (which is also backed up regularly on top of the geo-replication) and such.

For any operational or unimportant data, we now use low redundancy accounts. These skip the replication and are also faster, since they’re backed by SSDs instead of magnetic drives. They have a higher storage cost, but a much lower cost per operation, and since the operational data is stored only briefly but heavily operated on, the result is both faster and cheaper.
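
As an illustration of the idea (this is not our actual cloud code, just a sketch using the Azure.Storage.Blobs client library, with made-up names), routing uploads between the two kinds of accounts can be as simple as:

[code]
// Illustrative sketch, not actual production code: route durable data to the
// geo-replicated account and short-lived operational data to a cheaper,
// lower-redundancy account. Uses the Azure.Storage.Blobs client library.
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public enum DataDurability { Durable, Operational }

public sealed class StorageRouter
{
    private readonly BlobServiceClient _geoRedundant;  // GRS account: assets, transaction data
    private readonly BlobServiceClient _lowRedundancy; // LRS account: upload chunks, thumbnails, queues

    public StorageRouter(string geoRedundantConnection, string lowRedundancyConnection)
    {
        _geoRedundant = new BlobServiceClient(geoRedundantConnection);
        _lowRedundancy = new BlobServiceClient(lowRedundancyConnection);
    }

    public Task UploadAsync(DataDurability durability, string container, string blobName, Stream data)
    {
        var service = durability == DataDurability.Durable ? _geoRedundant : _lowRedundancy;
        var blob = service.GetBlobContainerClient(container).GetBlobClient(blobName);
        return blob.UploadAsync(data, overwrite: true);
    }
}
[/code]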

Some other parts of the infrastructure were optimized as well. We switched to the Microsoft CDN instead of Akamai, as it completely eliminates the cost of transferring data from the data center to the CDN (which is otherwise paid in addition to the CDN bandwidth) and gives us more statistics and control.

With a few smaller additions and adjustments, this has already dropped the cost significantly and improved the scalability further. We’ll keep monitoring the usage and make changes as needed, so we can ensure the cloud services keep running smoothly and that we have enough headroom to keep improving their functionality, for example by introducing a proper full-text search service.

The sudden surge of users at New Year’s has more than quadrupled the load on the infrastructure and got us some good data on how it scales, what works well and what could use more optimization.

[h2]Community Highlights[/h2]
[h3]Pedal to the metal - new vehicle system by Lucasro7 and Gearbell[/h3]
Man, have things been taking off! As the pedal is pressed to the metal, users in Neos have been going wild with vehicles! We have hover cars, monocycles, motorcycles and honkocycles (with feet flaps included).

[previewyoutube][/previewyoutube]

Thanks to Lucasro7 for making the system and Gearbell for the visual/audio aesthetics. Aegis, our art director, even made a full-on race track with a vertical loop. The vehicles automatically stick to the ground, so you can ride through the loop if you keep up the speed!

All the vehicle craze has kicked off AGRA as well, a racing league made specifically for competing against each other in hover vehicles.

If you’d like to join in on the fun, there is a new community Discord for vehicle enthusiasts in Neos. Happy racing, folks!



[h3]Maze Chapter 2 by InnocentThief[/h3]
Another wonderful time of bewilderment and puzzling abounds: InnocentThief has brought us a brand new puzzle map! Being the second part, it comes with some nice nature aesthetics and a more open feel as you get lost in the map, solve some puzzles and figure out what you need to survive the experience.



[h3]Winter Lake Cabin by Valagrant[/h3]
Here comes a classic map by Valagrant that many of you might recognize from another platform! It’s the Winter Lake Cabin, a cozy little nook on the side of a lake where you can look out and gaze at the moon in the somber, quiet night. Bring a couple of friends and warm up by the fire, or hop inside and make some hot cocoa! Good job, Valagrant!



[h2]What’s Next & updated roadmap[/h2]
For the week ahead, our main focus is going to be the desktop mode. There are still more parts of the system to be designed, but we might get enough implemented for a minimal usable version, at least for being able to move around, without too many interactions. There might be more as well, but at this point it’s hard to predict.

We’ll keep the legacy debug screen mode in for as long as we can, so you can keep using it for building and testing if you already have been. Some small things will likely break (or already have) as we overhaul the systems, but generally it should still work. We’re actually using it ourselves for testing the screen mode, as it technically acts just as another VR device, allowing us to toggle between the legacy screen and the proper screen mode on the fly.

We have also updated our roadmap on GitHub, which gives you a rough idea of what major features are coming and what state they’re in. Note that this gives just a rough state of things and the order of items on the roadmap doesn’t determine the order they’ll be implemented in. Our approach is to focus on one major feature at a time and switch focus as necessary, whether dictated by current needs or just to shake things up.

Many of the smaller additions, tweaks and bug fixes will happen along the way as usual. If you’d like to see a full list, you can check out the patch notes on Steam or the official Discord to see everything that has been done.

We hope that you had a fun week and as always, thank you for your support! Without you this project wouldn’t be going forward and improving every day. I’d also like to add a special thanks to everyone for their kind words of support from the Yearly Update. It is not easy to talk about the difficulties, even more so in public (particularly for me), but I really appreciate everyone’s kindness and positivity.

Thank you again and see you next week!