
Neos VR News

Interview with Metamovie crew, noise suppression with RNNoise, desktop progress

Hello everyone and welcome to another weekly update!

As the work ramps up on proper desktop support, the first fruits of this process are starting to arrive in the form of related features. In the latest build, the view and ear positions can now be overridden for a user, allowing you to create VR third-person modes, avatars with custom neck models, or a streamer camera that renders audio from its own viewpoint for better footage.

We have also integrated the open source RNNoise library, which uses recurrent neural networks to filter noise out of your microphone input, greatly improving the quality of everyone's voice without depending on RTX cards and while using very little CPU.

This Friday we also did the first of our live interview streams, with 21 questions for Jason Moore, the director of the Metamovie project, and two of the actors: Nicole Rigo and Kenneth Rougaeu. If you missed the livestream, you can check it out below - it's definitely worth a watch.

The cloud infrastructure has received some major optimizations as well, reducing the cost of the services while preserving the same functionality, ensuring that it scales even better with more users and gives us more headroom to improve functionality.



[h2]Interview with director Jason Moore and Metamovie actors[/h2]
This week on the weekly Neos stream, our Community Manager Nexulan interviewed Jason Moore, director of Alien Rescue, accompanied by actress Nicole Rigo and actor Kenneth Rougaeu. The conversation goes through a series of questions ranging from life, to how they got into showbiz, to making interactive film in VR.

If you missed the interview, the recording is certainly worth a watch: you can learn quite a lot about the history of the Metamovie project and the people behind it, and hear some really fun behind-the-scenes stories.

Our plan is to do an interview stream like this every month, picking some of the prominent members and creators from the community. Follow us on Twitch if you haven’t already so you don’t miss our next stream and get a chance to ask questions live!

[previewyoutube][/previewyoutube]

[h2]First steps towards proper desktop support[/h2]
A large chunk of the work is now devoted to proper desktop support. It's still mainly in the design phase, as we look for the best ways to architect the system to ensure both high flexibility and longevity, making it easy to extend and maintain as part of the codebase.

We have quite a few systems crystallizing from this design work, with a few of them implemented already. One of the primary underlying mechanisms is the ability to dynamically switch between VR and screen mode without having to restart. In the latest public build, the head output system has been extended and unified to support this.
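To give a rough idea of the shape of such a system, here's a minimal conceptual sketch. The interface, names and wiring are purely illustrative, not our actual internals - only the existence of a toggleable "VR_Active" state (see the patch notes below) comes from the build itself:

[code]
using System.Numerics;

// Illustrative sketch only - not actual Neos internals.
// Both outputs implement one interface; the engine reads from whichever
// is active, so the mode can flip at runtime without a restart.
readonly record struct Pose(Vector3 Position, Quaternion Rotation);

interface IHeadOutput
{
    Pose ViewPose { get; } // where the user sees from
    Pose EarsPose { get; } // where spatialized audio is rendered from
}

class HeadOutputSwitcher
{
    private readonly IHeadOutput _vr;
    private readonly IHeadOutput _screen;

    public bool VRActive { get; set; } = true;

    public HeadOutputSwitcher(IHeadOutput vr, IHeadOutput screen)
        => (_vr, _screen) = (vr, screen);

    // Toggling VRActive reroutes rendering and audio without a restart.
    public IHeadOutput Current => VRActive ? _vr : _screen;
}
[/code]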

When the screen view is active, a set of subsystems will be responsible for driving the positions of the head and hands instead of the VR device as well as handling the general interactions. Those are the parts that are still being heavily designed, but we have some good concepts coming up as well.

In particular, we're aiming to make the system highly modular, allowing for hybrid combinations: for example, mouse + keyboard interactions with a VR headset, or VR devices with only a single controller (the other hand being simulated), and of course the full desktop mode with everything simulated and controlled via mouse, keyboard, gamepad or touchscreen.

If you’d like, you can check out the latest design notes below. There’s still a lot of work to be done and most of the system isn’t set in stone yet, so you can expect more things to take shape and fall in place as we go.



However, with some of the underlying bits already implemented, there are new features you can already play with in the latest build, even in VR.

[h2]Overriding the user's view and ear positions[/h2]
One of the first fruits of the desktop mode additions is the new ability to override where the user sees and hears from. This is needed for desktop, where the view direction and position are driven by code in a particular world, rather than by the physical position of the user's head. With features like third person, we also need to decouple where the user's head is from where they see the scene.

Thanks to these changes, you can now override the position of the user's root, view and ears in the latest build, even when in VR. This has a lot of cool practical (and some less practical) applications. We've already seen users create VR third-person modes, with the ability to put your viewpoint behind your avatar, or avatars with a separable head that can be grabbed and moved away, taking the user's viewpoint with it.
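Conceptually, the override works as a simple fallback chain: the tracked head keeps driving the UserRoot, while rendering and audio read from an override slot when one is assigned. A minimal sketch (the Slot placeholder and all names here are illustrative, not the actual component API):

[code]
#nullable enable

// Illustrative sketch only - names do not match the actual components.
class Slot { /* stands in for a scene object with a global transform */ }

class UserViewOverrides
{
    public Slot? RootOverride; // e.g. out-of-body setups
    public Slot? ViewOverride; // e.g. a grabbable "detached head"
    public Slot? EarsOverride; // e.g. the streamer camera

    // Rendering reads from the override when assigned, otherwise from the
    // tracked head - setting the override back to null disables it.
    public Slot ResolveView(Slot trackedHead) => ViewOverride ?? trackedHead;

    // Ears follow the view unless overridden separately, keeping audio
    // consistent with what is actually being seen.
    public Slot ResolveEars(Slot trackedHead) => EarsOverride ?? ResolveView(trackedHead);
}
[/code]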

You could also use this to experiment with avatars where the viewpoint is driven by the avatar's IK and neck model. This might be nauseating to a lot of users, as their head motions won't match 1:1, but it's an interesting way to experiment and create very unusual avatars.

For more practical uses, the streamer camera now has the ability to render audio from its viewpoint. When streaming or recording, this will make any spatialized audio have the correct position from the viewer’s point of view, at the cost of making it more confusing for the streamer.

In the future, we'll build more VR functionality on this mechanism as well. For example, it will allow you to put your viewpoint outside of your avatar while an animation plays on the character, or to keep the user's view stationary in the world while their avatar walks through the environment.

We hope that you'll have a lot of fun using these new features, building even cooler avatars and applications with them, and we look forward to bringing you more features like this as we continue to work on the desktop mode.

[h2]Eliminating background microphone noise with RNNoise[/h2]
One of the smaller but impactful additions this week was the integration of the RNNoise library for filtering noise out of the microphone input. This open source library uses recurrent neural networks trained on a large dataset (6.4 GB) of noise data. It is very effective at isolating voice even from very noisy inputs and at removing other unwanted sounds like breathing, headset rattling, keyboard noise and so on.

Since this library runs purely on the CPU, doesn't require a GPU/RTX card and is very fast, the new noise suppression is now on by default for all users. This will help improve the overall quality of voice for both existing and new users, making the "jet engine with a microphone" a thing of the past. As we get more desktop users in the future, with all kinds of different microphones, this should prove very handy.

If you'd prefer to use only noise gating or an external noise elimination solution, you can disable the library in the settings, but we recommend that most users leave it on. Even with a high quality microphone it will eliminate unwanted background noise and make recordings sound cleaner.

[previewyoutube][/previewyoutube]

In typical Neos fashion, we have done a full integration of this library with our engine and its audio system, making it accessible for processing audio clips as well. If you have an audio clip you'd like cleaned up, you can now do so through the inspector, alongside the other audio processing options.

Since we couldn't find any C# wrapper for this library, we created our own and published it as open source as RNNoise.NET, in case you'd like to integrate the library into your own projects.
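For the curious, here's roughly what driving the native library from C# looks like. This is a simplified sketch built on rnnoise's public C API; the actual RNNoise.NET wrapper surface may differ in its details:

[code]
using System;
using System.Runtime.InteropServices;

// Simplified sketch - the real RNNoise.NET wrapper may differ in details.
static class Denoiser
{
    public const int FrameSize = 480; // rnnoise processes 10 ms frames at 48 kHz

    [DllImport("rnnoise")] private static extern IntPtr rnnoise_create(IntPtr model);
    [DllImport("rnnoise")] private static extern void rnnoise_destroy(IntPtr state);
    [DllImport("rnnoise")] private static extern float rnnoise_process_frame(
        IntPtr state, float[] output, float[] input);

    // Denoises 48 kHz mono samples in place. Note that rnnoise expects
    // samples in 16-bit float range (roughly -32768..32767), not -1..1.
    public static void DenoiseInPlace(float[] samples)
    {
        IntPtr state = rnnoise_create(IntPtr.Zero); // NULL = built-in model
        try
        {
            var frame = new float[FrameSize];
            for (int offset = 0; offset + FrameSize <= samples.Length; offset += FrameSize)
            {
                Array.Copy(samples, offset, frame, 0, FrameSize);
                rnnoise_process_frame(state, frame, frame); // returns voice probability
                Array.Copy(frame, 0, samples, offset, FrameSize);
            }
        }
        finally
        {
            rnnoise_destroy(state);
        }
    }
}
[/code]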

[h2]More optimizations for the cloud infrastructure[/h2]
Over the past week we've also spent some time analyzing the performance and cost of our cloud infrastructure and noticed some significant inefficiencies. While the cost is still manageable, looking for ways to reduce it is always important to ensure the infrastructure scales as more users arrive.

We have made several major changes to the cloud architecture to eliminate those inefficiencies while maintaining the same functionality. One of the major sources of unnecessary cost was our use of geo-redundant storage for all our data. This storage system replicates everything stored to another data center in a different part of the world.

This replication is crucial for durability in case of a data center disaster (like flooding or fire), making sure there is a copy elsewhere, but it comes at the cost of both bandwidth and storage. A lot of operational data, like upload chunks, thumbnails, processing queues and static data, is either temporary or unimportant, so paying for its replication isn't worth it.

To mitigate this, we split the storage system into multiple differently configured parts. The original geo-replicated storage keeps any data that needs to be durable - primarily your assets (items, worlds) and transaction data (which are also backed up regularly on top of the geo-replication).

For any operational or unimportant data, we now use low-redundancy accounts. These are backed by SSDs instead of magnetic drives, so they're faster, and while their storage cost per gigabyte is higher, their per-operation cost is much lower. Since operational data is stored only briefly but heavily operated on, this makes it both faster and cheaper overall.
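In practice, a split like this boils down to routing each piece of data to a differently configured storage account. A rough illustration using the Azure Blob Storage SDK - the connection strings, names and routing rule are placeholders, not our actual setup:

[code]
using System;
using Azure.Storage.Blobs;

// Placeholder illustration - not our actual configuration.
enum Durability { Durable, Operational }

class StorageRouter
{
    // Geo-redundant account: assets, records, transaction data.
    private readonly BlobServiceClient _durable =
        new BlobServiceClient(Environment.GetEnvironmentVariable("GRS_CONNECTION"));

    // Low-redundancy, SSD-backed account: upload chunks, thumbnails,
    // queue work items - anything that's fine to lose.
    private readonly BlobServiceClient _operational =
        new BlobServiceClient(Environment.GetEnvironmentVariable("LRS_CONNECTION"));

    public BlobContainerClient GetContainer(string name, Durability durability) =>
        (durability == Durability.Durable ? _durable : _operational)
            .GetBlobContainerClient(name);
}
[/code]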

Some other parts of the infrastructure were optimized as well. We switched from Akamai to the Microsoft CDN, which completely eliminates the cost of transferring data from the data center to the CDN (leaving just the CDN bandwidth cost) and gives us more statistics and control.

Together with a few smaller additions and adjustments, this has already dropped the cost significantly and improved scalability further. We'll keep monitoring usage and making changes as needed, so the cloud services keep running smoothly and we have enough headroom to keep improving their functionality - for example by introducing a proper full text search service.

The sudden surge of users at New Year's more than quadrupled the load on the infrastructure and gave us some good data on how it scales, what works well and what could use more optimization.

[h2]Community Highlights[/h2]
[h3]Pedal to the metal - new vehicle system by Lucasro7 and Gearbell[/h3]
Man, have things been taking off! With the pedal pressed to the metal, users in Neos have been going wild with vehicles! We have hover cars, monocycles, motorcycles and honkocycles (with feet flaps included).

[previewyoutube][/previewyoutube]

Thanks to Lucasro7 for making the system and Gearbell for the visual/audio aesthetics. Aegis, our art director, even made a full-on race track with a vertical loop. The vehicles automatically stick to the ground, so you can ride through the loop if you keep up your speed!

All the vehicle craze has kicked off AGRA as well, a racing league made specifically for competing against each other in hover vehicles.

If you'd like to join in on the fun, there is a new community Discord for vehicle enthusiasts in Neos. Happy racing, folks!



[h3]Maze Chapter 2 by InnocentThief[/h3]
Another wonderful time of bewilderment and puzzling abounds: InnocentThief has brought us a brand new puzzle map! Being the second chapter, it comes with some nice nature aesthetics and a more open feel as you get lost in the map, solve puzzles and figure out what you need to survive the experience.



[h3]Winter Lake Cabin by Valagrant[/h3]
Next comes a map by Valagrant, who brings a classic that many of you might recognize from another platform! It's the Winter Lake Cabin, a cozy little nook on the side of a lake where you can look out and take a gander at the moon in the somber, quiet night. Bring a couple of friends and warm up by the fire, or hop inside and make some hot cocoa! Good job, Valagrant!



[h2]What’s Next & updated roadmap[/h2]
For the week ahead, our main focus is going to be the desktop mode. There are still more parts of the system to be designed, but we might get enough implemented for a minimal usable version - at least being able to move around, without too many interactions. There might be more as well, but at this point it's hard to predict.

We’ll keep the legacy debug screen mode in for as long as we can, so you can keep using it for building and testing if you already have been. Some small things will likely break (or already have) as we overhaul the systems, but generally it should still work. We’re actually using it ourselves for testing the screen mode, as it technically acts just as another VR device, allowing us to toggle between the legacy screen and the proper screen mode on the fly.

We have also updated our roadmap on GitHub, which gives you a rough idea of what major features are coming and what state they're in. Note that this is just a rough picture, and the order of items on the roadmap doesn't determine the order they'll be implemented in. Our approach is to focus on one major feature at a time and switch focus as necessary, whether dictated by current needs or just to shake things up.

Many of the smaller additions, tweaks and bug fixes will happen along the way as usual. If you'd like to see a full list of all that has been done, you can check out the patch notes on Steam or the official Discord.

We hope that you had a fun week and, as always, thank you for your support! Without you this project wouldn't keep moving forward and improving every day. I'd also like to add a special thanks to everyone for their kind words of support following the Yearly Update. It's not easy to talk about the difficulties, even more so in public (particularly for me), but I really appreciate everyone's kindness and positivity.

Thank you again and see you next week!

2021.1.17.998 - View/listener transform override, cloud optimizations and more

With the work starting to ramp up on the desktop mode, here are the first related fruits, implemented thanks to underlying architectural changes and additions - particularly the ability to override the rendering root, view and ears (listener) transforms!

There are two main features built on top of this right now: one for avatars, giving you full control to create custom setups, override where the user "sees" and "hears" from, or put their viewpoint outside the actual avatar body; and an easy way to render all audio from the viewpoint of the streamer camera, making the footage's audio make more sense to viewers for any spatialized sources.

Another major, more internal change is a set of cloud infrastructure reorganizations and optimizations, mainly reducing inefficiencies to save costs and improve performance and scalability. There's more to come as I continue to monitor the impact of the changes.

There's a bunch of other smaller additions, tweaks and bugfixes as well. There are bits of the proper desktop mode too - nothing usable yet, but it's starting to take form. More to come soon!

[h2]New Features:[/h2]
- Added a mechanism to override the transform of the user's root, view and "ears" for audio-visual output, while preserving their actual UserRoot in the world, by assigning Slot references on the UserRoot component
-- Note that view should be used sparingly for VR users (if at all), as it will negate any actual head movement, which will be dizzying to a lot of users. If you want to create an "out of body" experience, override the root instead.
-- The view override can be put on the avatar's head so the view still moves with the head, but driven by the avatar's motion. Be careful with this as well, as motion that isn't 1:1 can cause nausea too. I STRONGLY discourage using this as a standard feature of avatars for adjusting the view (even if subtle) and only recommend it for gimmick avatars and personal use
- Added AvatarUserRootOverrideAssigner (under Users/Common Avatar System) which allows overriding the root, view and ears from equipped avatar (requested by @GearBell, @Abysmal, @ProbablePrime, @Shifty | Quality Control Lead and @Hayden)
-- The component updates in realtime. You can set the Override target to null to disable or change it to a target object to dynamically assign a new target
- Added "Audio from camera viewpoint" setting to Interactive Camera dialog, which will render the audio from the camera's viewpoint (requested by @Nexulan | Community Manager and many users before (sorry I didn't keep a list! ;_;))
-- This can be useful for streams for spatialized audio to make it less confusing for viewers, at cost of making it more confusing for the streamer
- Added LookLeft, LookUp, LookRight and LookDown to EyeLinearDriver, which allows driving 0...1 directional values (e.g. blendshapes) based on where the eye is currently looking (based on request by @Groxxy the Eye-Puppeteer)
-- This can be used to drive extra blendshapes that morph the eye when it looks in particular directions (a small sketch of one possible response curve follows this list)
-- The response of the values can be controlled using the LookMultiply and LookPower fields
- Added "Extract Sides" audio processing option to the inspector, which removes the center channel from stereo/quad tracks
-- This can be used to remove vocals from music tracks, where they are typically mixed to the center (see the sketch after this list)
- Added "Prefer Specular" to advanced model import settings, which will prefer the Specular variants of PBS materials when importing (requested by @Robyn (QueenHidi))
- Added Rect text input node (x, y, width, height)
-- This now also makes it possible to extract driver node or direct inputs for Rect values (based on report by @chemicalcrux)
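
For the curious, here's a guess at how a directional look value like LookLeft could be shaped: project the gaze direction onto an axis, then scale and shape it. The actual EyeLinearDriver response may be computed differently, so treat this purely as illustration:

[code]
using System;
using System.Numerics;

// Illustrative guess - the actual EyeLinearDriver math may differ.
static class EyeLook
{
    // Returns a 0...1 value for how far the gaze points along the given
    // axis, scaled by lookMultiply and shaped by lookPower.
    public static float LookValue(Vector3 gazeDirection, Vector3 axis,
        float lookMultiply = 1f, float lookPower = 1f)
    {
        float along = Math.Clamp(Vector3.Dot(gazeDirection, axis) * lookMultiply, 0f, 1f);
        return MathF.Pow(along, lookPower);
    }
}
[/code]

And a sketch of the classic mid/side trick behind "Extract Sides": anything mixed identically into both channels (the phantom center, where vocals usually sit) cancels out when the channels are subtracted. This is the underlying idea, not the engine's exact code:

[code]
// Illustrative sketch of the mid/side technique, not the exact implementation.
static void ExtractSides(float[] interleavedStereo)
{
    for (int i = 0; i + 1 < interleavedStereo.Length; i += 2)
    {
        float left  = interleavedStereo[i];
        float right = interleavedStereo[i + 1];
        float side  = 0.5f * (left - right); // mid = 0.5f * (left + right) is discarded
        interleavedStereo[i]     = side;     // left side signal
        interleavedStereo[i + 1] = -side;    // right side is the inverse
    }
}
[/code]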

[h2]Work-in-progress features:[/h2]
- Expanded Neos' input/output system to support two toggleable rendering outputs - VR and Screen - using a "VR_Active" state that can be changed at any time (for testing purposes it's currently bound to the F8 key, but it's not in a usable state yet)
- Unified Neos' head output system, removing duplicates and deprecated versions to simplify the code and its management
- Added "VR Active" state to user and corresponding node, which determines if the user is currently in VR mode or screen mode
-- Note that the legacy screen mode is treated as "VR" mode. This info will be useful as the proper desktop mode is fully implemented

[h2]Tweaks:[/h2]
- Redesigned part of cloud storage system architecture to utilize multiple differently configured storage services to reduce cost and optimize performance
-- Any operational data (file upload chunks, asset variant computations, thumbnails, status updates and related queue/table objects and their work items, generally anything that's ok to lose) now uses low-redundancy, low-latency, low-operation-cost (SSD-backed) storage systems to prevent the extra costs of geo-redundant storage
-- Any permanently saved data (assets, records) are still using geo-redundant storage, to ensure durability even in case of data center disasters
- Reworked asset restore mechanism to run on only a single worker at a time and less frequently, to avoid unnecessary load on the cloud resources
- Removed obsolete session thumbnail extension mechanism, reducing some networking / Cloud API load
- When "Setup IK" in advanced import is disabled, the biped detection report will no longer be generated (based on report by @H3BO3 and @chemicalcrux)
- Upgraded CloudX dependencies and related libraries
- Upgraded to Unity 2019.4.18f1 (from 2019.4.17f1)

- Merged Esperanto, Japanese and Chinese locale updates by @Melnus
- Merged Korean locale update by @LUA
- Merged French locale update by @brodokk
- Merged Czech locale update by @rampa_3 (UTC +1, DST UTC +2)
- Merged Dutch locale update by @AnotherFoxGuy
- Merged Russian locale update by @Shadow Panther [RU/EN, UTC+3]

[h2]Bugfixes:[/h2]
- Improved Inventory and File Browser interaction security (based on report by @Komdog)
- Fixed cloud asset variant processing not deleting associated queue data blobs, resulting in increased costs
- Fixed errors in cloud asset dependency processing, preventing asset dependencies from being removed in cases where even after the removal, the user would still be over their quota limit, causing the storage usage to get stuck and resulting in delays in cloud processing (based on report by @DJNightmares)
- Added generic argument validation to ValueTag and ValueFieldProxy components (based on discovery by @Cyro)
- Added guard to exceptions in OnWorldSaved and OnFocusChanged world events, to prevent errors in those from crashing the entire world (based on report by @Epsilion)
- Fixed world object transfer mechanism trying to transfer already destroyed items, causing exceptions (based on report by @Epsilion)
- Grabbable configured with DestroyOnRelease will now clean itself up when pasted as well, to prevent from getting stuck (based on report by @Epsilion)
- Fixed LogiX tooltip throwing an exception when trying to extract driver node for unsupported datatypes (based on report by @chemicalcrux)
- Potentially fixed desktop view being incorrectly rotated when using Portrait mode on a screen that's connected to an integrated GPU (based on report by @Freyar)
- Fixed FocusWorld node being able to keep the user in the current world when impulsed every frame (reported by @Cyro)
- Added extra logging for headset presence status changes, to help diagnose issues when the headset disconnects (based on report by @Rue Shejn | Artist 3D)

2021.1.13.764 - RNNoise noise suppression integration, new cloud CDN, fixes

Just a small build today, but with some important changes. The major new addition is the integration of the RNNoise noise suppression library! This library uses recurrent neural networks trained on a large set of data, but runs purely on CPU and is quite fast (it will run even on mobile devices), while being very good at cleaning up unwanted noises (check #devlog to see some examples and more info).

The library is now on by default for everyone, which should improve the general quality of voice for existing and new users, even with poor microphones or background noise, and should come in handy once there are more desktop users in the future with varying microphones. You can turn it off in settings if you prefer not to use it (or use another solution).

I'm also making some changes to the cloud. I've been investigating some sources of increased cost and found a few things that should provide major improvements. We're switching to a new CDN service (Microsoft, from Akamai) and there were some other optimizations. Let me know if you notice any weirdness with downloading assets or things not loading.

There are a few other tweaks and bugfixes too. I've started poking around some internals of Neos that I haven't touched in a while, slowly preparing things for the desktop support. There's a small new bit of it in this build too, though not something you can use (yet)!

[h2]New Features:[/h2]
- Integrated RNNoise noise suppression library for eliminating audio noise using recurrent neural networks
-- This is now default on for microphone input to reduce any background noise, breathing and ensure good quality voice audio for all users (can be turned off in settings)
-- You can also use it to process any audio clips in Neos through the inspector using the new Denoise (RNNoise, optimized for voice at 48 kHz) option
-- The library runs purely on CPU, no GPU/RTX support required
--- Our fork of the library: https://github.com/Frooxius/rnnoise
--- Source of our .NET wrapper for the library: https://github.com/Frooxius/RNNoise.NET

- Added Strafing property to PhysicalLocomotion which allows controlling whether strafing with secondary is supported or not (requested by @Shifty | Quality Control Lead)
-- This respects the "Allow Strafing" setting by default

- Added OverlayLayer, similar to HiddenLayer, which separates objects into a layer that always renders on screen overlay
-- Note that this is for internal use only for the screen support, I strongly advise against using this yourself

[h2]Tweaks:[/h2]
- Moved to Microsoft Azure CDN from Akamai CDN
-- This is mostly an internal change, which provides more diagnostics and reduces some cloud service costs.
- Added caching of asset metadata on the cloud API to heavily reduce database queries and improve response times
- Added total/completed/failed gather job counters to the debug dialog
- Neos account in Contacts will now always appear online as a bot account
- Lowered the default noise gate threshold thanks to the new noise suppression feature (I recommend tweaking your own; you may be able to set it lower than before if you have RNNoise on)

- Merged Japanese, Esperanto and Chinese locale updates by @Melnus
- Merged Korean locale update by @LUA
- Merged Japanese locale fixes and tweaks by @かず (kazu / GitHub: kazu0617)

[h2]Bugfixes:[/h2]
- Fixed exceptions in HapticPointMapper on headless (found in a log from @Medra)
- Added extra data model diagnostics to help diagnose some issues found in log from @Medra
- Fixed an exception causing the session to crash when running permission system cleanups
-- This should fix random session disconnects reported by (@Hayden, logs provided by @Polaris (she/her))

2021.1.11.642 - Screenshot inventory auto-save, voice quality improvement, fixes

Hello everyone! Sorry for the lack of builds lately. I've been taking things a bit slow recently (and working on the yearly update), but I've now started on a bunch of things and am picking up the pace again. This build has a bunch of small additions, tweaks and fixes.

One major addition is the ability to auto-save screenshots to your inventory! If you're a pack rat like me, you can define an auto-save path and have every single screenshot saved automatically (including ones others took that you save through the context menu)!

I've made some changes to help improve overall voice quality, including in cases of packet loss. There's more that can be done there, but I'll see what difference the current changes make first.

Anyway, I hope those changes and fixes help - I'll have more stuff coming soon!

[h2]New Features:[/h2]
- Added CloudUserInfo (under Cloud/Indicators) which will fetch cloud user information for given UserID (based on request by @Ryuvi | Technical Artist)
-- Currently provides Username, Registration Date and IconURL (can be fed directly to StaticTexture2D)
- Added IgnoreReverbZones property to AudioOutput, which will prevent any reverb zones from affecting the particular audio output (based on request by @GearBell)
- Added Auto-save screenshot path to Settings, which allows any taken or saved (through the context menu) screenshots to be automatically uploaded to your inventory
-- You only provide the directory path, name is taken from screenshot itself as usual. Several variables can be used in the name:
-- Time taken: %day%, %month%, %year%, %second%, %minute%, %hour%, %day_name%, %month_name%, %day_name_en%, %month_name_en%
-- Session start time: same as above, but with session in front, e.g. %session_day%, %session_month% etc...
-- %location_name% - name of the world/session where it was taken
-- %neos_version% - version of Neos in which this photo was taken
-- Both forward and backward slashes work
-- E.g. VR Photos\%year%\%location_name%\%month_name% %day% would save screenshot under VR Photos\2021\Neos Hub\January 11
- Added internal event mechanism when records are saved. This will now make Inventory automatically pick up any items saved by other methods (e.g. the screenshot auto-saving) and refresh the UI

[h2]Tweaks:[/h2]
- Added handling for lost voice data packets, which will now fill in the missing audio data rather than skip it
-- This should improve voice quality for users with significant packet loss ratio (based on reports by @Zephyr.С, @Zane, @Rue Shejn | Artist 3D, @Hayden and others)
- Voice messages now ignore reverb zones (based on feedback by @GearBell, @Turk, @Lewis Snow | Lead Audio Engineer and @Earthmark)
- Video players now ignore reverb zones (based on feedback by @GearBell)
- OpusStream now uses maximum encoding complexity on PC (previously complexity 2 out of 10) to provide best audio quality. Mobile platforms still use 2 to conserve CPU usage
- Inventory/Message item spawn undo message is now localized
- AssetMetadata (photo, audio) now stores the host of the session where the asset was taken, the access level of the session and whether the session is currently hidden from listing
-- This information is also indexed in the cloud when the asset is saved and will be searchable/filterable in the future
- Added NonHeadlessUserCount property to OnlineUserUserCount components
- The online user count facet now shows the total number of non-headless registered users instead of the total number of registered users, and the two numbers are now swapped (the approximate number of all users is in parentheses now)
- Switched back to latest youtube-dl (2021.01.08) from youtube-dlc (2020.11.11-3)
- Merged Dutch locale additions by @AnotherFoxGuy
- Merged Japanese locale tweaks and fixes by kazu0617 and @Aesc
- Merged major Czech locale proofreading revision by @rampa_3 (UTC +1, DST UTC +2)
- Merged English locale fixes by @rampa_3 (UTC +1, DST UTC +2)
- Merged Korean locale fixes by @MirPASEC
- Merged Spanish locale additions and tweaks by @Ruzert

[h2]Optimizations:[/h2]
- Added type caching for user root components lookups, speeding up frequent lookups of certain components (e.g. for interactions, dynamic bones and more)
- Some small optimizations when fetching cloud user info

[h2]Bugfixes:[/h2]
- Fixed exception in cloud API when running a record search with invalid user or group
- Added exception logging for asynchronous stream decode tasks
- Added extra logging information for failed gather jobs to help diagnose failed loads (based on reports by @Sykes, @Enverex, @oXoMaStErSoXo and others)
- Fixed asset variant processing system getting stuck due to temporary internet disconnect and not resuming once the connection is restored
- Fixed user root components potentially returning already destroyed components, resulting in various errors when those are then used
-- This should fix null reference exceptions with certain avatars and dynamic bones, causing those to break (reported by @GearBell, @Cataena and @Shifty | Quality Control Lead)
- NeowGlowCircle is now tolerant to having its drives unassigned, preventing exceptions when parts of the visual are missing
- Fixed exceptions when validating generic types when the type is null, causing certain behaviors to break (e.g. custom generic type selection) (reported by @Epsilion)
- Fixed model import breaking on models which require transform computation on the root node (e.g. animating the scene root, as reported by @chemicalcrux)
- Fixed LinearMapper still clamping values even when Clamp is unchecked (reported by @Alex from Alaska)
- Fixed Undo action for spawning Inventory/Message item that contains multiple items undoing only a single item (reported by @Coffee | Programmer)
- Added exception guard to the online user stats fetching loop to prevent it from breaking (based on report by @Raith (CytraX))
- Fixed being able to trigger tooltip equip action when the user is child of the tooltip, causing odd behavior (reported by @Shifty | Quality Control Lead)

Building the metaverse bonfire with the community - Neos in 2020 in review (2/2)

This is a continuation of part 1 - read it first if you haven't already.


[h3]Neos Festa[/h3]
The Japanese team and community have been very busy as well, creating a wide variety of content and organizing a festival to showcase creativity not only from the Japanese community, but from the international one as well.



The second Neos Festa brought together a few dozen creators, each submitting their creation in the form of a booth that could be loaded from a user interface containing basic information on the author and links to their profiles.

Many of the submissions took the concept of a booth in very creative directions, with some simply showcasing their art and gadgets, while others packed entire environments, worlds and even interactive games into their booth.

It’s definitely a good place to visit to check out more of the amazing content and we hope to see many more festivals like this to celebrate the creativity across different cultures and communities.

[previewyoutube][/previewyoutube]

[h3]Localization[/h3]
Another benefit of our custom UI framework UIX is full support for Unicode and for TTF/OTF font files. Building on this, we have implemented the first part of the localization framework, allowing Neos' UI text to be translated into different languages.

As English is a second language for me and my cofounder, we know that an English-only interface can present a barrier for many people, like our Japanese community, and that localization can significantly improve accessibility.

Despite that, the effort that our community has put into translating Neos into different languages has still surprised us. At the time of writing, Neos is available in 18 different languages (plus 2 variants), with the maintainers of most of them regularly ensuring that all newly added text is translated.



We’re really grateful that you’ve been so passionate about localizing Neos and it has been amazing to see it in so many different languages, even if we cannot understand them ourselves.

The localization process for the core UI also helped test out the underlying systems, which we'll be generalizing and making usable for any user-created content in the future. With community members from across the world, this will make your own content more easily accessible to others, no matter their native language.

If you’d like to help with the localization process, check out the official GitHub repository here. We’re giving any large contributors a bonus 25 GB of storage space on Neos as a little show of thanks for their effort.

[h3]Wiki and tutorials[/h3]
Yet another front where the community has been helping out is documenting Neos on the official Wiki, covering everything from the basic controls, to documenting components and LogiX and their behaviors. Having a good Wiki helps out both existing and new users and we’re really happy that many of you have dedicated your time to help on this front.

We have seen many new tutorials as well, notably by ProbablePrime, who has now published over 200 tutorials on various topics on his YouTube channel, serving as a great resource for new users and even experienced content builders.


https://www.youtube.com/c/ProbablePrime/videos

Tutorials were built directly in Neos as well, ranging from small items that help teach various concepts and interactions to whole tutorial worlds, such as the community-made tutorial by Earthmark. These have shown us some good ways to teach new users basic concepts and get them comfortable in Neos.

Our long term goal is to integrate this documentation and tutorials directly into a knowledge base in Neos itself, but those channels will still remain great resources to learn more about the platform and its capabilities.

[h3]Streamers, Opera performance, first scientific studies, use in schools and more[/h3]
There were many more exciting things happening within the community over the course of the year - too many to mention them all. We've had a few prominent streamers (big thanks to Rolfgator, SnowSoS, KimplE and many others) come check out the platform, bringing in a lot of new users and showcasing some of its possibilities to a wider audience (another big thanks to the International Dance Association and JJfxMultimedia for showcasing our 11-point tracking system).


https://twitter.com/bbotics/status/1342896211251011584
IDA showing off breakdance moves with elbow tracking

Plenty of smaller streamers and community members also keep streaming Neos regularly, some devoted ones (shoutout to Rukio and everyone featured on his streams) nearly every day, giving people a glimpse into everyday life, creativity and shenanigans on the platform!

Near the end of the year, the Amadeus Artists in Vienna put on a virtual opera performance in Neos VR, with users from all over the world watching. The performance combined real world capture with virtual environments, creating a beautiful way to experience the art.

[previewyoutube][/previewyoutube]

Neos is also used in schools and in research. One of our most notable users, the Sydney Human Factors Research (SHFR) group, has used Neos to conduct several different studies, the first of which was recently published in the peer-reviewed journal PLOS ONE, with more coming.

We're really proud that Neos can serve science too, and even more so that members of the community have helped make those studies happen, showing the incredible power of real-time collaboration across different groups, creating synergies and connecting people who would probably never have met otherwise.

Neos was also used by doc. Ing. Mgr. Petr Klán, CSc. to remotely teach two full semesters at the Czech Technical University in Prague. The students could watch the lectures from the comfort of their homes and hand in their assignments virtually, as the school buildings were closed due to COVID.

NeosVR textbooks, written by doc. Ing. Mgr. Petr Klán, CSc. for his university course

https://www.youtube.com/watch?v=FGwB6y_JRcg
One of the university lectures, in Czech

Neos Classroom, a variant of Neos with the UI optimized for education running natively on the Oculus Quest, has also been utilized throughout the year at multiple secondary schools in the Czech Republic for remote education.

Overall, 2020 has become the year where the events and community creations crossed the point where we can no longer keep track of them all, let alone mention every single one deserving of a highlight, so we can only say thank you to everyone for this year. Thank you for your support, thank you for spreading the word about Neos and showing what it is capable of, and thank you for your dedication to making Neos a place full of life, creativity and joy, but also a place for many to learn, grow and improve their professional work, particularly during this challenging year.

[h2]Overcoming challenges and plans for the future[/h2]
The growth of the community this year hasn't been without its challenges either. At the beginning of the year, we planned out many major features that we wanted to work on, but only some of them got prioritized, particularly the UI and UX. While all of them still remain on the roadmap, we needed to make some adjustments to how we prioritize and communicate features.

The UI update was a particularly difficult one, including on a personal level. Changing the fundamental ways of how everyone interacts with Neos brought a lot of emotion and conflicting opinions into the process, which was repeated several times as we replaced one part of the UI after another.

While the feedback and ideas were welcome and helped shape and improve the new UI significantly, the emotion and continuous pressure took their toll. It also clashed with the effort to replace much of the UI quickly and move focus to other tasks.

With no breaks or distractions in between for months, this paradoxically caused the process to take longer than it could have. The UI itself wasn't the only challenge, however, but rather an indication of a larger issue.

Back in 2019 the community was still small enough that any change was generally positive and most issues could be resolved quickly, but with growing numbers of users this became impossible.

With more users the variety of opinions and preferences on many of the changes increased. This led to a number of feature/change requests that couldn't all be resolved, or that would even be mutually exclusive. I'd also get more and more messages, spending a few hours almost every day just replying to people and resolving their issues.

Navigating this has become challenging, as any decision would still leave a group of people unhappy. I’ve had to start saying no to certain requests too, as they weren’t feasible, would cause too many problems or would clash with other features.

For me personally this was particularly difficult, as I always strive to ensure everyone's happy; not being able to prevent or resolve all the resulting negativity, even by spending all my time working, had a compounding effect over the course of the year.

Adding to that, certain major and long awaited features, like the physics engine upgrade and the many optimizations depending on it (either directly or indirectly through a planned workflow), hit unexpected roadblocks (in the case of physics, a bug in the JIT compiler), making the emotional drain worse.

As the support and community grew, I’ve pushed myself out of the equation when deciding on what to prioritize, trying to focus on what the community wants and needs and what we need for the project to grow, thinking that it would let us achieve the important milestones faster.

As a result, I found it harder and harder to work throughout the year. It became more difficult to focus, particularly on the more complex features and issues I'd wanted to address for a while (e.g. the full body hips fix), and more difficult to generally stay positive and creative.

As the pressure got worse, I constantly felt that I was doing everything wrong and negativity dominated my every day. I had become afraid to talk about things publicly, worried they would be nitpicked, turned into long arguments, responded to with passive aggression or used against me later, even when I just shared a progress update, some behind-the-scenes details on what I was working on or the status of some feature.

I love Neos and the community too much for something like that to ever make me stop, so I kept going despite it, but I've learned that this approach doesn't make the work go faster. Instead, the opposite occurred, as this mental state ended up paralyzing my ability to make decisions or take actions. It would leave me staring blankly at my notes, or the code, when pushing myself to work.

I would still try to move as fast as I could, but would end up working on issues and features that were not as complex, had as few complications as possible and had the lowest potential for a negative reception. This was an important lesson for me and something I'm having to learn to deal with as I focus on ramping up my efficiency again, making it easier to focus on the harder problems.

For the year ahead, we will instead switch back to the approach we took before, to maintain better efficiency and keep the project moving faster. Instead of working on a single big thing for months uninterrupted, trying to get it to perfection based on the feedback and reports, you'll see us switch between different priorities more.

That way we’ll advance different aspects of the platform bit by bit, rather than focusing on a single one. While it might take longer for any individual feature to get to its polished state, overall it will help maintain development momentum and keep things fresh and fun, allowing us to polish the features with a new perspective, rather than exhaustion.

You will also see more features implemented and bugs fixed that weren't particularly high priority, but were easy enough to handle or particularly fun to work on. This was previously part of my process and I've slowly started including it again over the past few months, because it helps me mentally and builds momentum to tackle the more difficult work needed by the community.

Over the course of the year we have also expanded our team and delegated more responsibilities, particularly with moderation, quality control (handling bug reports, feature requests and so on) and shifted towards GitHub and more formal/efficient methods of communication.

Our team has helped me tremendously over the course of the year, taking care of different tasks and responsibilities and redirecting a lot of the negativity away from me, so I can focus more on the core development. They also helped advance the project on different fronts and helped collect feedback and bug reports from the growing community.

This transition caused some trouble with public communication though, as I was still being overwhelmed and the team searched for ways to shield me from a lot of the stress. As time goes on and we give these things more structure, the process will become smoother.

[h3]Making sure your feedback is heard[/h3]
As the growth continues, we will be leaning more on those formal methods to keep the process manageable and make sure that as many people as possible get their concerns addressed. We hope that you’ll help us out with this process, whether by simply voting on the issues relevant to you on GitHub, providing proper structured feedback or teaching other users how to best share their own concerns and suggestions.

We won't be able to address every single piece of feedback or concern, so properly prioritizing the ones that affect many users will continue to become more important. Using the search functionality and keeping up with the channels we put in place will also become more necessary as the same questions get repeated more and more.

By upvoting issues on GitHub that you feel are important, and encouraging users dealing with the same issue or wanting the same feature to do the same, you can help us with this process and ensure we focus on the things that matter to the community.

If you’re creating a new issue yourself, following the instructions and filling out all the information in a clean and concise way will help improve its chances of being addressed sooner, compared to issues that are missing information, are too vague or difficult to read and process. The extra few minutes you put into your issue can save hours, even days of work on our end.

While we understand the passion, and how frustrating some things can be, we appreciate it when those emotions are kept out of the discussion. This keeps the focus purely on the issue and prevents unnecessary stress. We're humans too: when things are kept polite and positive, we work faster and better, because we're in it together!

Eventually we’d like to integrate the issue tracker fully into Neos itself, making it easier for users to report bugs or request features, without having to register another account and use external websites, but for the time being GitHub is going to be the primary point for those resources.

But above all, please realize that we're not machines. We might be pushing out lots of builds, but we're still humans; there's a limit to how much we can do and we'll make some mistakes too. Work with us to make our job easier. Being passive aggressive, rude or angry doesn't help anyone and only makes dealing with the issues harder and slower. Even worse, over time it results in burnout, which can push back the features you care about by several months.

We'd like to keep sharing more of the development with you publicly, but that means sharing the bumps and warts that are a natural part of the process. If those cause problems and draw attention away from the development, we'll have to keep them internal until we're sure there's no risk of shuffling priorities or pushing something back.

We would also love to hang out with the community as regular users more and just have fun on the platform, rather than talking about issues and answering questions most of the time. Using the proper channels for those goes a long way (big thanks to everyone who does, we really appreciate it!) and makes it easier for us to pop up in public more without getting overwhelmed.

Most importantly, our goal is to keep the development fun and engaging, to keep the ball rolling and deliver new features to you faster. Sometimes you'll see certain things prioritized that might not be the most urgent for the community, but that help us shake things up and keep the overall pace. The Universe was born from Chaos, so it only makes sense that the Metaverse comes from a little bit of it too.



I appreciate everyone's support and kindness throughout this year - it has helped tremendously in dealing with the negative parts - and I hope that even the less urgent additions will keep making your Neos experience better every day and bring more fun things to play with.

[h3]Prioritizing Desktop Mode[/h3]
After a lot of debates with the team, and considering the weight the decision has on the community and ourselves, we have decided to prioritize proper desktop support as the next major feature. There are several compelling reasons to prioritize it right now, at least for the first phase of it, despite our general focus on VR first.

When designing Neos, we’re always thinking on how to make the interactions natural for VR and take full advantage of it and we’re taking the same approach with the desktop mode as well. It will be built to utilize the same subsystems that were built for VR, but make them easy to use with keyboard and mouse (or a gamepad) and unify the development for both modes going forward.

Proper desktop support has been one of the most requested major features for a while, with many people without VR headsets wanting to play, and numerous users even leaving bad reviews due to the lacking support.

For a while we ignored these requests and focused solely on VR, but with the growth of the VR community and many new users coming over and making their home in Neos, more and more users ended up moving back to other platforms, or not making the switch in the first place, because they would have to leave their non-VR friends behind.

Making the desktop experience much more comfortable and fully featured will prevent this splitting of mixed friend groups and help the growth of the VR side too. The lack of desktop support in Neos has also been a problem for event organizers, as event-goers are often mixed desktop/VR groups.

Having the desktop mode in place before other major features will help as well, particularly with the Neos Store (marketplace). Once unveiled, the store can bring a lot of attention from investors and buyers coming from other platforms. Requiring VR to fully interact could be a factor that puts many of them off, which could end up hurting the creators selling their items, as well as the Neos Credits ICO and anyone holding NCR.

With desktop mode already in place, all other major additions and improvements will potentially impact a much broader audience, bringing even more users to the platform.

We have already started designing the architectural changes and additions to support desktop play, unifying Neos' underlying systems. Our current screen mode is only a quickly implemented hack that (poorly) re-implements a few of the core systems.

By building the desktop support properly on the same subsystems that VR uses, we'll not only make it more functional, but also eliminate a lot of recurring problems and issues (how many times have you heard the response "Desktop mode isn't currently supported"?), and we'll be able to develop both desktop and VR as one going forward.

As a nice bonus, it will help with debugging as well, as some of the subsystems aren't currently emulated or are difficult to use (e.g. locomotion) and require me to jump into a VR headset to test every change. The workflow will improve for mobile builds too, allowing us to quickly test them on a regular phone without having to jump into the Quest and wait for it to boot up.

One more crucial reason why desktop is being prioritized is a bit of a personal one. With the momentum on optimizations lost due to the unexpected roadblock and waiting on 3rd party fixes, desktop mode presents a very safe choice, as it has virtually no risk of dependencies and unexpected roadblocks, affecting only Neos' own codebase, over which we have full control. Working on this will allow us to rebuild some of the momentum, which will then transfer to other major features as well.

[h3]What’s planned for desktop mode?[/h3]
As mentioned above, building the proper desktop mode on the same subsystems will help unify parts of the codebase and make desktop behavior consistent with playing in VR. In practice this means that things that currently don't work or are very buggy will now be functional: locomotion modules (e.g. being able to walk and jump around), avatar behaviors, tool/gadget interactions and proper respect for the permission system.

The crucial part is making those interactions easy to use. We have a few subsystems planned for this, which will combine IK with the new capabilities of the interaction system from the UI overhaul. For example, when equipping a gun, the system will make your character automatically aim at whatever your mouse cursor is pointing at.
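Conceptually, this boils down to turning the cursor position into a world-space ray and feeding whatever it lands on to the arm IK as an aim target. A rough sketch - all names here are illustrative, not the actual system:

[code]
using System;
using System.Numerics;

// Illustrative sketch only - not the actual aiming system.
readonly record struct Ray(Vector3 Origin, Vector3 Direction);

static class DesktopAiming
{
    // raycast returns the world-space hit point, or null if nothing was hit.
    // When the cursor points at empty space, aim at a far point along the
    // ray so the arm pose stays stable.
    public static Vector3 AimTarget(Ray cursorRay, Func<Ray, Vector3?> raycast,
        float fallbackDistance = 50f)
        => raycast(cursorRay)
           ?? cursorRay.Origin + cursorRay.Direction * fallbackDistance;
}
[/code]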

Some very early notes on the Desktop support

We plan on supporting both first person and third person modes to allow for a variety of play and interaction, with third person also supporting a free-cam for easy editing. If you have a VR headset, the system will support instantly switching between the two rather than having to restart Neos, which will not only benefit the creative workflow, but also let you stay in a world and talk with people if you need to take your headset off for a while.

Building out the basic interaction methods and the first and third person modes is our immediate goal, as this will give desktop users (nearly) the same capabilities as VR users, unify the system for content creators, remove the need for any hacks for screen users and make usage a lot more comfortable.

However, our plans go beyond that, allowing for things like splitting up the viewport or pinning in-world UIs to the screen (e.g. inspectors) for easy and quick access. We'd also like to integrate face tracking solutions to give avatars a lot more expressivity.

Some other features will become more relevant and viable as well, such as an input binding system and support for gamepads. Whether those extra features will be implemented now or later is currently up in the air - we might prioritize other features, like the physics engine or more UI work, to mix things up - but hopefully this gives you a better idea of our long term goals.

Regardless, the initial implementation is the most crucial part, as it will make Neos significantly more accessible to users and simplify development going forward, not just for us, but for anyone building content in Neos.

[h3]Mentor Team[/h3]
In line with the moderation team, we're also planning to unveil a team of mentors: community members who are interested in helping out others in an official capacity. Mentors will be part of the moderation team. Turk has been appointed by Veer to lead the mentor section and will be responsible for organizing the other mentors.



Helping out new users and each other is at the core of our community and something that many members already participate in (and, we hope, will continue to participate in even outside of the Mentor program). We feel this is a good way to add some level of organization to the process for those who are interested.

There will be more information on the mentor team soon, so keep an eye on our weekly updates if you're interested in learning more.

[h3]More UI work, more optimizations, physics, Neos store[/h3]
Following the desktop mode, we’ll be interweaving work on several major and minor features, depending on what’s most efficient at the time. There are a few main priorities that we’ll be choosing from.

We plan on reworking more of the UI - most importantly the inventory, file browser and contacts - and adding the workshop, allowing for easy sharing and searching of items, tools, avatars and any other creations in Neos, providing a proper solution instead of public shared folders. This will help new users get started too, as it'll make it much easier to find an avatar.

With a fix for the JIT compiler bug in Unity recently on the way, the upgrade to BEPUv2 will be unblocked, putting the physics engine swap and the heavy optimizations built on its functionality back on the menu. This will get the ball rolling for other related optimizations as well, making Neos run smoother as we go.

Not only that, but adding full support for rigid body physics will be at the top of the list as well and will happen when possible, bringing a whole new level of creativity.

Another blocking bug was recently fixed in the libVLC library as well (good timing with the beginning of the new year!), which will allow us to continue with its integration and replace the aging UMP, improving reliability and some functionality.

Another crucial feature will be the Neos store (marketplace), giving everyone a way to sell their creations and make a living by building content in (or for) Neos. The Neos store isn't technically a feature in itself, but rather a combination of two features: the workshop mentioned above and the license / object ID system.

The latter will allow for marking any objects and assets with authorship and ownership information, controlling their distribution and use and allowing users to purchase a license to use the object in their own sessions.

Submitting these items to the workshop will allow them to be browsed or searched for, resulting in a functional store. The benefit of this approach is that content can also be bought after being discovered organically in a world or session - for example, by seeing another user playing with a gadget.

While those are our next main goals, they will likely be interwoven with other changes and adjustments based on the current needs of the project, the community and the team. Our goal is to strike a good balance and keep the momentum going, so we can get to the other major additions on our roadmap as soon as possible.

[h2]To the year ahead[/h2]

Neos is a long term project for us and we believe that the metaverse needs to be built properly, without taking shortcuts that would compromise and stall the development at some point in the future. We’re grateful that you, our community also understand and support that goal and that we can share this journey together.

The years of work on this project have had many ups and downs. We've seen projects with teams and funding orders of magnitude larger than ours unveil their own stabs at the metaverse while we were still laying the foundations of the engine powering ours. Now, while many of those projects are gone or defunct, we're continuing to grow, exceeding the daily user bases they ever had, still with only a fraction of the resources.

All of that is in big part thanks to you: your support, passion and hard work in building the metaverse with us, contributing your thoughts, creations, tools, tutorials and time, organizing events, helping new users and building incredible projects that show what the platform is already capable of and what it can become.

The joy and level of creativity keeps amazing us more with each passing day, and we can't even imagine how much more will come as the community and our feature set keep growing. We want to make Neos as awesome as you make it special. Thank you for being with us and thank you for supporting this project!