
Kerbal Space Program 2 News

For Science! Performance Check-In

The team has been hard at work improving the performance of Kerbal Space Program 2 since launch, so we'd like to take a moment to showcase the difference between the Early Access launch (v0.1.0.0) and the For Science! (v0.2.0.0) release coming next week on December 19. The full global release timing graphic can be found here.

Here's a look at some graphs we use to understand the game's performance and the improvements that have been made to our minimum specification and recommended specification machines:

FPS between EA launch and For Science! on minimum specs.

FPS between EA launch and For Science! on recommended specs.

This represents a huge amount of work by the development team across a number of features and releases. With performance optimization being one of the most frequent requests since launch, we're looking forward to hearing from players how this work improves the overall gameplay experience.

If you're curious about performance tuning and how we make decisions in that area, the rest of this check-in provides a high-level overview from our Engineering Team and should provide a bit of extra context for the above graphs.

[h3]Performance 101[/h3]
As you can see in the charts, performance varies quite a bit between the various scenes players might find themselves in. When we’re investigating performance-related issues, we need to consider many variables, both in the game and on the physical machine. Let’s examine the basics of performance tuning and what we look at when making decisions about performance optimization.

In keeping with the theme of For Science!, we should start by defining our experiments and the measurements we will use to determine whether we are moving in the right direction. So, let’s start with frames per second (FPS), which many of you will already be familiar with. A frame is one full cycle of recalculating what has changed in the game state and redrawing the scene for the player. The higher the FPS, the smoother the game looks and feels, as it allows us to capture inputs more quickly and show updates to the game state in a more responsive way.

When we are digging into performance, we want to look at that value in a different way: milliseconds per frame (ms). Why would we do that? Basically, we want to understand what, in a given frame, is taking up time so we can make it faster. For a 30 FPS goal, we get 33.3 milliseconds per frame, so it becomes a lot easier to set a budget for how much time a given section of the game should take. But what are we taking time from exactly? There are two pieces of hardware we generally are looking at for performance: the GPU and the CPU.
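The FPS-to-budget arithmetic above can be sketched in a few lines (a minimal illustration, not KSP 2 code):

```python
def frame_budget_ms(target_fps: float) -> float:
    """Convert a target framerate into a per-frame time budget."""
    return 1000.0 / target_fps

# A 30 FPS target leaves ~33.3 ms to update and redraw each frame;
# a 60 FPS target halves that budget.
print(round(frame_budget_ms(30), 1))  # 33.3
print(round(frame_budget_ms(60), 1))  # 16.7
```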

At a high level, a Graphics Processing Unit (GPU) is a piece of hardware that specializes in the types of calculations needed to render a given scene. It is much faster than a Central Processing Unit (CPU) at running many small tasks in parallel, but much slower at running tasks in series. To get the maximum performance out of a computer, we want to use both to their full potential and optimize the game accordingly.

Now some of you are thinking “What about memory, storage devices or things like that?” And you’re right that we do consider those things, but for the purposes of this overview we are going to concentrate on the biggest items that affect performance. But bonus points all around.

An example internal debug output.

One of the important parts of performance tuning is to understand what the bottleneck is in each scene, either the CPU or the GPU. In the image above the CPU is taking 44ms per frame on a scene and the GPU is only taking 19ms. The CPU is the bottleneck in this case, so it doesn’t matter how much more optimization we do on the GPU. To improve this specific case on this specific hardware, we need to determine and target the most expensive operations on the CPU.
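As a toy illustration of that bottleneck logic (the function names are ours, not the engine's):

```python
def bottleneck(cpu_ms: float, gpu_ms: float) -> str:
    """Whichever processor takes longer per frame gates the framerate."""
    return "CPU" if cpu_ms >= gpu_ms else "GPU"

def effective_fps(cpu_ms: float, gpu_ms: float) -> float:
    # Simplification: real CPU and GPU work overlaps across frames, but the
    # slower side still sets the floor on frame time.
    return 1000.0 / max(cpu_ms, gpu_ms)

# The debug output above: CPU 44 ms, GPU 19 ms -> CPU-bound at ~23 FPS.
print(bottleneck(44, 19), round(effective_fps(44, 19)))
```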

One of the challenges of making a PC game compared to consoles is the variety of hardware choices, specifically when it comes to CPUs and GPUs. There are thousands of combinations that someone could have, and each of them is going to perform slightly differently. That doesn’t even get into memory differences or software differences like operating system patches, hardware drivers, etc. This is why we publish minimum and recommended machine specifications - to help players understand what we have tested and whether their machine should be able to play the game with adequate performance.

Now that we have some background on what developers look at when tuning performance, let’s go back up to those graphs at the top. For one, these are per-configuration graphs, which tell us how the game is performing on that specific hardware. We run lots of these, which helps us remove the hardware variables from the tests and look at how a given build is performing overall. It also lets us track this data over time and see whether things are improving. New features always affect performance, so we need to check the deltas consistently to determine where to spend time on improvements.

The other thing these graphs help determine is whether a given hardware setup has abnormal issues compared to other similar setups. A given manufacturer may have optimized their hardware in such a way that we get different performance out of certain improvements, and knowing that helps us optimize the game for the entirety of our player base across a wide range of hardware options.

[hr][/hr]We hope that this has been informative! As we work towards larger performance optimization goals in the future, we’ll be sure to share updates in posts just like this – and perhaps take a more technical deep-dive in a performance-focused dev blog, so let us know if you’d like to see that!

Thanks for reading, and we hope you enjoy For Science!

-The KSP Team

For Science! Global Release Timing

[h3]We are days away from the launch of For Science! on December 19![/h3]

Here's a world map of when the update comes out in your time zone:


For more information on what's in the update, check out this announce news post and our Science and Tech Tree video with Tom Vinita, Feature Lead.
https://www.youtube.com/watch?v=74qcdSk9V2M

We've got some fun things staging for release!

[hr][/hr]Keep up with all things Kerbal Space Program 🚀 KSP Forums KSP Website Facebook Twitter Instagram Intercept Games Discord KSP YouTube

Developer Insights #22 - Sky's the Limit

Hello Kerbonauts. I'm Ghassen, also known as 'Blackrack,' the newest graphics programmer on the team. You have no doubt noticed that we have improved the atmosphere rendering in v0.1.5.0. Today I’m going to share with you some insights into those improvements, as well as some of the improvements that are going to be in v0.2.0.0.

[h3]Inspecting the Atmosphere[/h3]
This is how our atmosphere appeared in v0.1.4.0 on Kerbin:


We can see a very nice-looking sky. However, the effect is very subdued on the terrain, and we have trouble reading the terrain’s topography: it is difficult to tell what we are looking at in the distance, and the sense of scale escapes us. Are those mountains? Are those hills?

Cut to v0.1.5.0, where we can immediately see a big improvement in the scene's readability.


We can now immediately get a sense of how far away things are and we get a better sense of scale. This is what’s known as aerial perspective.

[h3]How the Atmosphere is Rendered[/h3]
We are using a precomputed atmospheric scattering method, which is now standard in computer graphics and was popularized by Eric Bruneton.

It is precomputed, meaning all the heavy calculations involved in simulating how light scatters through the atmosphere are done once, for all possible altitudes and sun angles, and then stored in compact, easy-to-access tables. The latitude and longitude of the observer on the planet do not matter: thanks to symmetry, we can simply change the altitude and sun angles to get the scattering at any viewpoint.
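To make the precompute-then-look-up idea concrete, here is a deliberately tiny Python sketch: a made-up stand-in for the scattering integral is evaluated once over an altitude/sun-angle grid, and runtime queries are answered by bilinear interpolation (the grid sizes and the stand-in function are hypothetical, not the actual Bruneton implementation):

```python
import math

ALTITUDES = 16   # table rows: 0 (surface) .. 1 (top of atmosphere)
SUN_ANGLES = 32  # table columns: sun elevation from -90 to +90 degrees

def expensive_scattering(altitude_frac, sun_angle_rad):
    # Stand-in for the real scattering integral, evaluated only at bake time.
    return max(0.0, math.cos(sun_angle_rad)) * (1.0 - altitude_frac)

# Bake once...
table = [[expensive_scattering(i / (ALTITUDES - 1),
                               math.pi * (j / (SUN_ANGLES - 1) - 0.5))
          for j in range(SUN_ANGLES)]
         for i in range(ALTITUDES)]

# ...then answer runtime queries with a cheap bilinear fetch.
def lookup(altitude_frac, sun_angle_rad):
    x = (sun_angle_rad / math.pi + 0.5) * (SUN_ANGLES - 1)
    y = altitude_frac * (ALTITUDES - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, SUN_ANGLES - 1), min(y0 + 1, ALTITUDES - 1)
    fx, fy = x - x0, y - y0
    top = table[y0][x0] * (1 - fx) + table[y0][x1] * fx
    bot = table[y1][x0] * (1 - fx) + table[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```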

These tables can then be used to display the effect in a very performance-friendly manner while the game is running. They are known as look-up tables. This is what some of the slices in our look-up tables look like:



[h3]How Aerial Perspective is Rendered[/h3]
The look-up tables I described earlier can be used to find the colour of the sky for any given viewpoint inside or outside the atmosphere, as well as how much the atmosphere occludes celestial objects behind it (this is known as transmittance, or extinction; it describes how much of the original object’s light is transmitted and makes it to the observer).



The look-up tables only give us the light scattered towards us from the edge of the atmosphere, because they assume we are always looking towards the edge, so we cannot use them directly to get the colour of the atmosphere up to an object. Supporting that directly would make the look-up tables impractically big and would eat up our memory budget.



However, since the look-up tables allow us to get the colour of the sky from any viewpoint, we can re-express the scattered light up to a point/object as the difference between two samples to the edge of the atmosphere, starting from different positions.



We must also apply the transmittance from the observer to the second sample (in red on the diagram) for everything to be correct.
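The difference trick can be checked in one dimension with a toy uniform atmosphere, where both the edge-to-point in-scatter and the transmittance have closed forms (the constants and function names here are illustrative only):

```python
import math

SIGMA = 0.5  # extinction coefficient of the toy uniform medium
EDGE = 10.0  # position of the atmosphere's edge along this ray

def trans(a, b):
    """Transmittance between two positions (Beer-Lambert law)."""
    return math.exp(-SIGMA * abs(b - a))

def sky(p):
    """Closed-form in-scatter from p to the atmosphere edge (what a LUT stores)."""
    return 1.0 - math.exp(-SIGMA * (EDGE - p))

def aerial(cam, obj):
    """In-scatter over just the cam->obj segment: the difference of two
    edge samples, with transmittance applied to the second one."""
    return sky(cam) - trans(cam, obj) * sky(obj)

# Matches the closed form for the cam->obj segment:
direct = 1.0 - math.exp(-SIGMA * (4.0 - 1.0))
print(abs(aerial(1.0, 4.0) - direct) < 1e-12)  # True
```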

[h3]Putting It In-Game[/h3]
So now that we know the method to render aerial perspective, we can plug it in-game, and see what we get. Behold:


Hmm, that looks really strange around the horizon. So what’s happening here?

Recall that we are using look-up tables; these are loaded onto the graphics card as textures, and they have limited resolution and precision (bit depth). The aerial perspective method described earlier only makes precision issues worse by taking the difference between two samples, especially in high-variance areas (typically around the horizon) where any imprecision is amplified.

The way to deal with this is to first inspect the look-up tables, see if anything is stored in low-precision textures or with lossy compression, and use higher precision where needed (typically 16-bit or 32-bit per-channel floating-point textures).

After that, we can change the parametrization for how samples are distributed across the look-up table to maximize resolution where it is needed. The original paper offers a nice way to distribute samples, but we found that it works best for physical settings matching those of Earth, and less well for some of the settings used at Kerbal scale.

Finally, we review all the lossy transformations in the math and try to minimize any loss of precision and guard against various edge cases.

This is where most of the engineering effort in implementing precomputed atmospheric scattering is spent. Right now, we have gotten our implementation to a good place; however, the inherent limitations of the method mean that in the future we will move to a different, non-precomputed method that doesn’t suffer from these issues and would allow us greater flexibility.

[h3]The Importance of Mie Scattering[/h3]
We simulate Rayleigh scattering (air molecules), Mie scattering (water droplets and aerosols) and ozone absorption; each of these represents a different effect, and together they let us render all the kinds of atmospheres we want.

Mie scattering has a particularly noticeable effect and can be used to make atmospheres look foggy and cinematic while keeping a realistic look. I took these screenshots early in testing the atmosphere changes to illustrate the difference that increasing Mie scattering makes to a scene:





In the end, we went with a relatively subdued setting on Kerbin and a nice heavy setting on Laythe to set them apart, and also as a reward for flying to Laythe.

[h3]Atmosphere as Lighting[/h3]
Recall the transmittance we discussed earlier: the portion of light that reaches the observer and objects in the atmosphere. We can now use it to light objects by applying it to sunlight, which gives us the very nice, soft lighting you can see around sunsets and sunrises:


We can also use the transmittance on the clouds. Notice how areas in direct light get a nice reddish colour, while areas not in direct light receive ambient light, giving a very nice contrast between the reddish transmittance and the faint bluish ambient:


Using the atmosphere to do lighting also simplifies artists’ workflows: the alternative was to approximate the different lighting parameters at different times of day via various settings, and it was very difficult to make the clouds look “right” at every time of day. Now we have less work to do, and the result looks better and more coherent.

Speaking of clouds, next we will discuss some of the performance improvements coming in v0.2.0.0, but first let’s see how clouds are rendered in more detail.

[h3]How Clouds are Rendered[/h3]
Modern clouds are rendered via raymarching, a technique that involves “walking” through a 3D volume, incrementally sampling properties like density and colour as we move along, and performing lighting calculations. This method provides a more accurate and visually appealing result compared to traditional rendering techniques and is very well adapted to rendering transparencies and volumetric effects. This figure shows in red all the samples we have to do for a single ray/pixel on-screen:



Because of the number of samples we must take during the raymarching process, it is very demanding performance-wise. One solution is to render at a lower resolution and upscale.
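A bare-bones version of such a raymarch loop, over a made-up 1-D density profile (illustrative only; real cloud rendering runs per-pixel on the GPU with 3-D noise fields):

```python
import math

def density(t):
    # Stand-in cloud profile: a soft bump between t = 2 and t = 6.
    return max(0.0, 1.0 - abs(t - 4.0) / 2.0)

def raymarch(t_max=10.0, step=0.1, sigma=1.5):
    transmittance, colour = 1.0, 0.0
    t = 0.0
    while t < t_max and transmittance > 0.01:  # early-out once nearly opaque
        absorbed = 1.0 - math.exp(-sigma * density(t) * step)
        colour += transmittance * absorbed  # sample colour assumed white (1.0)
        transmittance *= 1.0 - absorbed
        t += step
    return colour, transmittance

colour, remaining = raymarch()
```

The cost is the loop itself: hundreds of density and lighting evaluations per pixel, which is why rendering at a reduced resolution and upscaling pays off.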

[h3]Temporal Upscaling[/h3]
Temporal upscaling was introduced in v0.1.5.0. The idea is to render a different subset of the pixels every frame. This is similar to checkerboard rendering, if you’re familiar with the concept, but generalized and not locked to half-resolution rendering. This diagram shows how 4x temporal upscaling works, with a full-resolution image reconstructed over 4 frames:


When the camera is in motion, the old pixels are moved to where they should be in the current frame, based on their position in space and how much the camera has moved since the last frame; this is called reprojection.

After moving the old pixels, their colour is validated against neighbouring new pixels to minimize temporal artifacts; this is called neighbourhood clipping, and it is the foundation of modern temporal techniques like TAA.
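In scalar form, neighbourhood clipping is just a clamp of the history value into the range spanned by the new neighbouring samples (a simplification: production TAA clips in colour space, often against a variance-based bounding box):

```python
def neighbourhood_clip(history, neighbours):
    """Clamp a reprojected history value into the min/max of fresh samples."""
    lo, hi = min(neighbours), max(neighbours)
    return min(max(history, lo), hi)

# A stale bright pixel (0.9) surrounded by new dark samples gets pulled back:
print(neighbourhood_clip(0.9, [0.10, 0.20, 0.15, 0.12]))  # 0.2
```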

Despite the neighbourhood clipping, we were still getting artifacts and issues in motion after this stage, due to the high number of “old” pixels compared to “new” pixels; typically this manifests as smearing or flickering. Our solution was to re-render the problematic areas separately at normal resolution, since these areas are only a small part of the final image.

This sounds great in theory, but while flying around clouds in a fairly heavy scenario, we see the following timings on a 2080 Super at 1440p:
  • Low-resolution rendering of new pixels: 5.45 ms
  • Reproject old pixels and assemble full resolution image: 0.12 ms
  • Re-rendering of problem areas: 4.11 ms
  • Process and add clouds to the rest of the image: 0.09 ms
For perspective, if we want to reach 60 FPS we need to render each frame in ~16.6 ms, so the re-rendering step takes a sizable chunk of the rendering time in v0.1.5.0, even though this approach is still faster than what we had in v0.1.4.0.

This is because those re-rendered areas are at the edges of clouds where rays must travel furthest and evaluate the most samples before becoming opaque or reaching the boundary of the layer.

For v0.2.0.0 we took a bit more inspiration from temporal techniques to find an alternative to re-rendering problem areas: if colour-based neighbourhood clipping isn’t sufficient, we can use depth and motion information (on-screen speed and direction of movement) to identify when reprojected pixels don’t belong to the same cloud surface/area and invalidate them as needed. The idea is to store all of this information from the previous frame; every frame, we compare the current data against it to get a probability that a reprojected pixel does not belong to the surface we are currently rendering.
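A sketch of that rejection test, with made-up thresholds (the real heuristic and its tuning are internal to the renderer):

```python
def rejection_probability(prev_depth, cur_depth, prev_vel, cur_vel,
                          depth_scale=0.1, vel_scale=2.0):
    """Turn depth and screen-space velocity mismatch into a 0..1 probability
    that a reprojected pixel belongs to a different surface."""
    depth_diff = abs(cur_depth - prev_depth) / max(cur_depth, 1e-6)
    vel_diff = abs(cur_vel[0] - prev_vel[0]) + abs(cur_vel[1] - prev_vel[1])
    return min(1.0, depth_diff / depth_scale + vel_diff / vel_scale)

# Same surface: nearly identical depth and motion -> history kept.
keep = rejection_probability(100.0, 100.2, (1.0, 0.0), (1.05, 0.0))
# A different cloud edge slid in front: depth jumps -> history discarded.
drop = rejection_probability(100.0, 40.0, (1.0, 0.0), (1.0, 0.0))
```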

After some implementation and tweaking, this ended up working well, and we can see the following improvement in rendering performance (screenshots taken on a 2080 Super at 1440p, framerate counter in the top left):

On the launchpad we went from 77 to 91 fps

In-flight around the cloud layer, we went from 54 to 71 fps

That’s about a 17-31% performance improvement on the whole frame and we save 2 ms to 4 ms on the rendering of the clouds.

You can look forward to these performance improvements and more in v0.2.0.0, out on December 19th!


Dev Chat - Science and Tech Tree

Hi Kerbonauts!

As we get closer to the For Science! update, another Dev Chat has dropped. Sit down with Creative Director Nate Simpson and Feature Lead Tom "FRIIIDAAAAAAAAY" Vinita as they take a deep dive into Exploration Mode and the tech tree, upcoming features of the update. Watch Nate and Tom play through the mode's first mission, Launch a Rocket, collect Science on Kerbin, and progress through the tech tree!

(It's not Friday, please don't tell Tom. He was very excited to shout the day.)

Be sure to also check out the sneak peeks we've been putting out, highlighting some goodies coming to For Science!.

There aren't any sneak peeks at the end of the Dev Chat...except in case you missed the announcement...

[h5]THE FOR SCIENCE! UPDATE WILL BE OUT DECEMBER 19, 2023!![/h5]
More details on global release timing to come. We can't wait to share!

For all you pixel investigators, shots of the tech tree below!



For Science! Release Date Announcement 📢

Attention Kerbonauts!

We're ecstatic to share that...

[h3]THE FOR SCIENCE! UPDATE WILL BE OUT DECEMBER 19, 2023!![/h3]
[h5](Yes, in three weeks!)[/h5]

Check out our last news post for information on some features to expect! And stay tuned for an all-new Dev Chat later this week that deep dives into everything this milestone update has to offer.

We'll share more details about global release timing when we get closer to launch. We can't wait for you all to get your hands on For Science!!

