
New Heights News

Studio and Game Introduction


First of all, thank you to those of you who have already played our game, wishlisted it, or interacted with the New Heights community so far. We here at Wikkl Works can’t wait to see even more players trying out the game during Steam Next Fest, starting June 19th. As our community is still small, we wanted to introduce ourselves and give a quick introduction to New Heights.

We are an independent, Utrecht-based Dutch studio founded by experienced game developers and climbers. Our team members are Guido (Creative Director), Finn (Producer), Geert (Senior Developer), Timon (Game and UI Developer) and Niels (Game Developer). After years of creating games for our partners, we decided as a team that it was the right time to create something of our own. That is why we are making New Heights, with the goal of building a realistic climbing and bouldering game.



[h3]So what makes New Heights different from other climbing games out there?[/h3]
New Heights is all about creating a realistic and immersive experience. With physics-based climbing mechanics, players will have to test their balance and grip to climb each of the in-game locations, all of which come from the real world. We personally go out and scan locations using drones and bring them into the game through our photogrammetry pipeline. Every handhold and every divot is exactly as it is in real life.

We want players to be able to climb things that they may never have the chance to. That’s why we chose real-world locations for our game. If you’ve ever wanted to become a climber but haven’t had the chance to try or to travel, New Heights lets you give it a shot!

We have loads of plans for future additions to the game, but when players pick up the demo during Next Fest they’ll first learn the controls in our in-depth tutorial and then climb two beautiful locations. As we head into Early Access later this year, we expect players to get upwards of 12 hours of content!



Thank you all again for your continued support and interest in New Heights. We can’t wait to see players climbing and mastering each of the locations available in the game. Let’s reach New Heights together!

Photogrammetry on Steroids - Part 5 (final part): Getting it all in the game


In this final part of the photogrammetry process, it’s all about getting the model into the game. Even though we decimated the mesh in the previous step, it still has tens of millions of faces and comes in one big piece. If we just dropped this into the game it might technically still run, but that level of definition isn’t needed for most of the mesh most of the time.

The first solution you might think of is to decimate it further. That would be perfectly fine for a lot of games, but in New Heights the climbing system actually needs the full definition of the mesh to decide whether something is climbable. So we still need the dense mesh, but at a distance we can show a lower-resolution version. Using LODs (Levels of Detail) we can achieve this right in Unity.

The only problem with this approach is that when you get close to the mesh it still switches to the full-resolution version, which drops your framerate fast. Our meshes are also very large inside the game world, because they are actual scanned cliffs. This means that standing at one edge of the cliff would swap the entire cliff to the dense mesh, even though most of it is still really far away.

With this in mind we decided it would be a good plan to cut the dense mesh into chunks. This way we can create individual LODs for each chunk and only show the dense version when you’re actually close enough to climb on it.



For cutting the cliff mesh and decimating each of the LODs, we decided it was time to switch programs and move to Blender. This let us use the power of a full-blown 3D program together with Python to automate it.

The Python script was made entirely with our pipeline in mind: the input is always the textured mesh from Metashape, and the output should be easy to integrate into Unity. Even though Python might not be the hardest language to work in, it took quite a bit of testing, iterating and tweaking to get everything working and connecting properly. Any small misalignment would show up in the game as seams and ridges.

In the end we got the script dialed in, and now we can fully automatically cut the cliff mesh into chunks, create multiple decimated versions of each chunk (one per LOD) and structure the output so that an import script in Unity can easily put it all back together.
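
To make the decimate-per-LOD step concrete, here is a minimal Blender (bpy) sketch of that part of the pipeline. This is not our production script: the LOD ratios, folder layout and object naming are simplified assumptions, and the chunk cutting itself is left out.

[code]
# Minimal bpy sketch: export several decimated LODs per chunk object.
# LOD ratios, paths and naming are assumptions, not our exact settings.
import os
import bpy

LOD_RATIOS = [1.0, 0.25, 0.05]   # assumed face ratios for LOD0..LOD2
OUT_DIR = "/tmp/cliff_chunks"    # assumed output root folder

def decimate_copy(obj, ratio):
    """Duplicate obj and apply a Decimate modifier to the copy."""
    copy = obj.copy()
    copy.data = obj.data.copy()
    bpy.context.collection.objects.link(copy)
    mod = copy.modifiers.new("decimate", type='DECIMATE')
    mod.ratio = ratio
    bpy.context.view_layer.objects.active = copy
    bpy.ops.object.modifier_apply(modifier=mod.name)
    return copy

def export_chunk_lods(chunk_obj, index):
    """Write one FBX per LOD ratio into the chunk's own folder."""
    folder = os.path.join(OUT_DIR, f"chunk_{index:03d}")
    os.makedirs(folder, exist_ok=True)
    for lod, ratio in enumerate(LOD_RATIOS):
        lod_obj = decimate_copy(chunk_obj, ratio)
        bpy.ops.object.select_all(action='DESELECT')
        lod_obj.select_set(True)
        bpy.ops.export_scene.fbx(
            filepath=os.path.join(folder, f"LOD{lod}.fbx"),
            use_selection=True)
        bpy.data.objects.remove(lod_obj, do_unlink=True)

# Assuming the cliff was already cut into objects named "chunk_*".
# Materialize the list first, so the exported copies don't join the loop.
chunks = [o for o in bpy.data.objects if o.name.startswith("chunk_")]
for i, obj in enumerate(chunks):
    export_chunk_lods(obj, i)
[/code]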

The Blender output ended up being a root folder for the cliff, containing a folder for each chunk. Each chunk folder holds the FBX files for the LODs (three for now, in our case). With that structure in mind we created an import script in Unity that takes the root folder as input and creates a GameObject for each chunk, with its LOD group filled with the correct meshes. In the end this gives us a nice prefab, ready for use!
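
For reference, the structure the Unity import script walks looks roughly like this (folder and file names here are illustrative, not our exact naming):

[code]
CliffName/              <- root folder, input for the Unity import script
├── chunk_000/
│   ├── LOD0.fbx        <- full-detail, climbable mesh
│   ├── LOD1.fbx
│   └── LOD2.fbx
├── chunk_001/
│   └── ...
└── ...
[/code]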



That is the overview of our full photogrammetry pipeline: from the reasoning behind why we use it, through testing whether it would even work, doing our first scans and creating our first models, to getting those models functional in the game. As you can imagine, we are always looking to optimize the workflow by automating more, improving the quality of the final prefab, decreasing processing time and tweaking every step along the way. Currently we are looking into better ways to remove noise and fill holes in the mesh.

We hope this overview of how we approach photogrammetry in games helps you when you want to use (meganormous) photogrammetry assets in your own game. Let me know if you need any help!

Photogrammetry on Steroids - Part 4: From 2D to 3D


Back in the office with gigabytes of video footage from Al Legne, Rocher du Casino and Crèvecoeur, it was time to start processing. This will come as a surprise to no one, but most if not all photogrammetry programs are built around photos, not videos.

Taking still images out of a video is trivial with ffmpeg. We started by simply taking an image every 3 seconds, which ensures enough overlap between the images. Feeding these into Agisoft’s Metashape already gave quite good results, but some images refused to align. When we looked at those images, we noticed they were very blurry.



Looking at the original video, we saw that around each blurry image there was more than enough footage to get a better frame out. So we used ffmpeg to extract more frames, then used blur detection to pick the sharpest image in each window and fed those into Metashape. To our own surprise, this method got over 99% of the photos aligned!
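
Here is a small sketch of that idea using ffmpeg and OpenCV. The frame rates, window size and blur metric (variance of the Laplacian, a common sharpness measure) are our assumptions for illustration; the actual script may differ.

[code]
# Extract candidate frames densely, then keep the sharpest one per window.
# Rates, window size and paths are illustrative assumptions.
import glob
import subprocess
import cv2

# Dump frames at 6 fps instead of one frame per 3 seconds, so every
# 3-second window has 18 candidates to choose from.
subprocess.run(
    ["ffmpeg", "-i", "cliff.mp4", "-vf", "fps=6", "frames/%06d.png"],
    check=True)

def sharpness(path):
    """Variance of the Laplacian: higher = more detail = less blur."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

frames = sorted(glob.glob("frames/*.png"))
WINDOW = 18  # 3 seconds of candidates at 6 fps
best = [max(frames[i:i + WINDOW], key=sharpness)
        for i in range(0, len(frames), WINDOW)]
# 'best' holds one sharp frame per window, ready to feed into Metashape.
[/code]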

[h3]What does this actually mean though?[/h3]
Converting the images into a 3D model happens in several steps that each build on the one before. The first is the alignment of the photos. The program looks for features in a picture (think hard corners and the like) and tries to find them in multiple pictures. When enough of these features are found across images, it can triangulate the positions the photos were taken from and align a virtual camera for each image.
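
As a toy illustration of that feature-matching step, here is what it looks like with OpenCV’s ORB detector. Metashape uses its own detector internally; ORB is just a stand-in to make the idea concrete, and the file names are placeholders.

[code]
# Find features in two frames and count how many they share.
import cv2

img1 = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Each good match is one feature seen in both frames -- exactly what the
# aligner triangulates the camera positions from.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
print(f"{len(matches)} shared features between the two frames")
[/code]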

After the images are aligned, the second step is to create a dense point cloud. Here the program uses things like curvature to find many, many more overlapping points beyond the features found in the previous step. All these points are again placed in 3D using triangulation. When this step is done, you can already see a vague version of what the output will be.

Based on this dense point cloud, the program can now build a mesh. It does this by some version of triangulation, and yes, this is confusing, because it’s a totally different kind of triangulation than in the previous steps: here it actually means “to divide into triangles”. After the mesh is built there are most likely a bunch of floating bits and ugly edges, which can easily be removed with some filtering. This mesh is actually still too detailed for us, because it breaks almost every program we tried to load it into. So we use the decimate function in Metashape to bring it back to a still very large, but not meganormous, number of vertices. Don’t throw away that high-detail version though, because we still need it in a later step!

A mesh without any textures on it is still very boring, though. With the power of Metashape we can generate albedo textures straight from the images. Here a little bit of good judgment comes in, because you need to pick the size and number of textures to create for the mesh. You might think just picking the highest resolution and a high number of textures would give the best results, but the quality of the textures is hard-limited by the input pictures. Metashape is a very magical program, but one thing it does not do is use AI to enhance or content-aware-fill parts of the textures. Also keep in mind that the higher the resolution and the more textures you export, the longer it will take and the more system resources you will need. For our computer with 64GB of RAM, the limit was around four 16k textures.
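
Metashape also exposes these steps through its Python API, so the whole chain above can be scripted. The sketch below is an outline under assumptions rather than our production setup: method names and defaults differ between Metashape versions, and the paths and face counts are placeholders.

[code]
# Outline of the photo-to-textured-mesh chain via the Metashape Python API.
# Exact method names vary by Metashape version; values are placeholders.
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["frames/000001.png", "frames/000002.png"])  # etc.

chunk.matchPhotos()      # find shared features across the images
chunk.alignCameras()     # triangulate the camera positions from them
chunk.buildDepthMaps()   # per-image depth, input for the dense stage
chunk.buildDenseCloud()  # dense point cloud (renamed in newer versions)
chunk.buildModel()       # triangulate ("divide into triangles") the mesh

doc.save("cliff_full.psx")                  # keep the high-detail version!
chunk.decimateModel(face_count=5_000_000)   # still large, not meganormous

chunk.buildUV(page_count=4)                 # four texture pages...
chunk.buildTexture(texture_size=16384)      # ...at 16k each
doc.save("cliff_decimated.psx")
[/code]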

The model still looks kind of flat, because there is no normal map yet. We can easily get one by using the high-detail version of our mesh: this step bakes all of those details into the decimated mesh as a normal map, so light reflects much closer to how it originally would. After the normal map we also generate an occlusion map, baking that really nice ambient occlusion depth right into the model.



Now that we have the model, we can prepare it for use in New Heights, which is what the next part will be about. Having super detailed meshes is amazing, of course, but a good framerate is very much preferable to a slideshow.

Photogrammetry on Steroids - Part 3: From scouting to scanning


Our studio is located in Utrecht (the Netherlands), and there aren’t any particularly interesting cliffs in the next town over, or even in the country. So we started looking for the closest outdoor climbing locations and settled on the Belgian Ardennes, at Frëyr and Rocher du Casino: a good balance of proper cliffs with a wide range of difficulties, without having to travel by airplane. As you can see from the thumbnail image, Al Legne (at Frëyr) is a pretty huge cliff, with a human for scale at the arrow.

[h3]Scouting[/h3]
To get high-quality data, it’s optimal to have a nice overcast day and to avoid anything moving between the photos. Things like leaves, animals and even people will have a major impact on your data. We chose the end of November for our trip, because there should be fewer people trying to climb and, more importantly, the trees no longer have any leaves. Time to book a hotel and get the preparations going!

[h3]Preparing[/h3]
Maybe the most important step in a successful scan of a cliff is the preparation. Please do not underestimate this: we are well aware of the excitement of being close to the cliff and flying your drone about, but it has to be done safely!
  • Be prepared to be outside for hours. Have enough warm clothes, food and (hot) drinks. It’s even a good idea to bring a tarp or a tent and some foldable chairs, so you have a place to keep your things dry and out of the wind, and to take a rest.
  • Bring enough power! Depending on the size of the cliff you need to have several battery packs for the drone and probably even have a way to recharge those packs, but also (VERY IMPORTANT) have multiple phones to control the drone with. Those phone batteries will drain really fast, especially when it’s cold outside.
  • Since we are scanning climbing cliffs, it’s of course also important to bring some safety gear: think mountain shoes, ropes and carabiners, and depending on the location it might be smart to bring a helmet. Last but not least, it never hurts to have a first aid kit at hand.




[h3]Scanning[/h3]
Now that we’re prepared and our batteries are charged, it’s time to go. Skipping some of the overhead logistics, I’ll move straight to the moment we start scanning. Based on the tests we did earlier, we use the drone to record video, flying in a start-stop pattern to ensure we get enough frames with low blur.

We decided it would be good for our source data to have multiple perspectives of the same area. To do this, we fly the drone up with the camera tilted +24° and then, in the same spot, fly the drone down with the camera tilted -45°. This gives the photogrammetry software more information about the 3D details. After one vertical strip, we stop the video, move the drone a couple of meters horizontally (ensuring about 30% overlap between the strips), start a new video and fly the next strip. When we see a particularly interesting or detailed area, we also fly some extra perspectives around it to make sure that data ends up in our dataset.
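
As a back-of-the-envelope check on that “couple of meters” figure, the strip spacing follows from the camera’s horizontal field of view and the distance to the wall. The FOV and distance below are our assumptions for illustration (the Air 2S advertises roughly an 88° FOV), not measured values.

[code]
# How far can the drone move sideways per strip while keeping ~30% overlap?
# FOV and wall distance are assumptions for illustration.
import math

FOV_DEG = 88      # DJI Air 2S advertised field of view (approximate)
DISTANCE_M = 2.0  # assumed distance from the rock face

footprint = 2 * DISTANCE_M * math.tan(math.radians(FOV_DEG / 2))
spacing = footprint * (1 - 0.30)  # keep 30% of the frame shared
print(f"frame covers ~{footprint:.1f} m; move at most ~{spacing:.1f} m")
# -> frame covers ~3.9 m; move at most ~2.7 m: "a couple of meters".
[/code]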



After hours of flying the drone and many gigabytes of video footage, we even had some time left and decided it would be cool to try to scan a ruin and see if it would make good content for the game (spoiler alert: this worked great!). Now it was definitely time to head back home. In the next part we’ll talk about how we process these videos into a meganormous 3D model.

Photogrammetry on Steroids - Part 2: How will this work?


It’s all well and good to decide you want to try something completely new (like photogrammetry, in our case) as a core part of your game, but then you have to start with some tests. We looked online, did some research and read about a lot of valuable methods already used in the field. Our case has a very concrete and consistent subject to scan, though: cliffs, boulders and structures. This gave us a very clear goal and clear limitations. Time to test the methods in practice!

The first time, we went out with our phones and took a bunch of pictures of some boulders. The results were surprisingly good, which gave us a lot of hope that we had made the right choice.



Those first tests convinced us that we would be able to use photogrammetry to create the models, but scanning a whole cliff would take quite a while if we had to climb every part of it with a phone in hand. So we decided it would be a good idea to buy a drone and get the licenses to fly it legally and safely.

Our drone of choice is the DJI Air 2S. It has a great sensor, really good image stabilization thanks to its gimbal, it’s compact and light, and it comes with actually decent software to control it. Guido took it upon himself to become our professional drone pilot. He didn’t have too many issues getting the licenses; honestly, the hardest part was figuring out which licenses are actually required.



Now, with the drone at hand (or rather, in the air), we started looking into a good methodology for collecting the photos we needed. We found an old brick factory close to our office with enough space around it to safely try things out. We started by moving, shooting a photo, moving, shooting a photo, and so on. It proved very inefficient! The drone needs to be fully stationary, or the photo comes out blurred and that picture only worsens the 3D result. It is also intensely time- and energy-consuming.



Next we tried shooting video, hoping the quality would already be good enough, and extracted frames from the video at an interval. Better! And most importantly: easier! This lets us move, keep the drone still for a moment, move, hold still again, and so on. At the scale we’re capturing, a more efficient process saves potentially hours or even days, and the faster capture also means more consistent lighting across the location, which balances out the lower-quality frames.
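
The extraction itself is a one-liner with ffmpeg (wrapped in Python here for consistency with the rest of our tooling). The 3-second interval matches what we describe in Part 4; the file names are placeholders.

[code]
# Pull one frame every 3 seconds out of a test flight video.
# Interval and paths are illustrative assumptions.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "test_flight.mp4",
     "-vf", "fps=1/3",        # one frame per 3 seconds
     "frames/%04d.png"],
    check=True)
[/code]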



With a methodology tested and approved for quality, we decided it was time for the real deal. In the next part we’ll talk about how we decided where to go, how to prepare for a photo scan of a cliff, and how we applied our methodology at scale.