“The future is now” - the endless possibilities of 3D scanning technology

What would you do if you had 32 cameras, more than a hundred actors and the right tech all at your disposal?

According to Rafał Kwaśny, Senior Technical Artist and the man in charge of Techland’s scanning department, you get a huge database for creating NPCs. The potential for scanning things doesn’t end there. We spoke to Rafał to find out more about the whole process.

When did your adventure with scanning begin?

I’ve been in this field for about 9 years now. My first position at Techland was as a technical artist responsible for graphics optimization. Thanks to Maciej Jamrozik, our Technical Art Director, we introduced the scanning process, and that happened very late in the development of Dying Light. At that time, the technology was completely new and used in only a handful of titles, such as the Metal Gear Solid series.

We wanted to test out the tech, so we bought the equipment and built the first version of a face-scanning studio. Scanning an actor, processing the model, and adding it to the engine took us 4 days back then. During the production of Dying Light 2 Stay Human, that time frame shrank to just one day. I could scan an actor today, and by tomorrow, he’ll already be in our engine’s library.

Why did you become interested in this technology in the first place?

There are several benefits to scanning. The first is that it significantly cuts down on the production time of many assets and lets us develop them on demand, based on what the art and design teams need at the time. Secondly, it enables us to create perfect copies of human anatomy and use them to develop our NPC database. The third advantage is that, thanks to this technological leap, we can implement our scans into our C-Engine almost instantly.

How did the scale of scanning differ between Dying Light and Dying Light 2 Stay Human?

When working on Dying Light, we were just testing out the tech. As I already mentioned, we didn’t actually implement it until the final stage of the game’s development cycle, which is why we mostly scanned our developers. With Dying Light 2 Stay Human, the scale of the project was much bigger, so we also started hiring professional actors, though we didn’t stop adding our employees to the game. In fact, we’ve managed to scan about a third of the people working with us so far. Altogether, that makes over a hundred faces in our database. This makes it very easy for us to generate background characters, since we can combine different scans to make entirely new models.
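As a rough illustration of how two scans might be combined into a new model, here is a minimal Python sketch that linearly blends the vertex positions of two face meshes. It assumes both scans have been retopologized to share the same vertex layout, and the library choice (Open3D), file names, and blending approach are illustrative rather than Techland’s actual pipeline.

```python
import numpy as np
import open3d as o3d  # example library; any mesh I/O package works

def blend_faces(path_a: str, path_b: str, weight: float = 0.5):
    """Linearly blend two face scans that share vertex topology."""
    a = o3d.io.read_triangle_mesh(path_a)
    b = o3d.io.read_triangle_mesh(path_b)
    va = np.asarray(a.vertices)
    vb = np.asarray(b.vertices)
    # Blending only makes sense if both meshes have the same vertex layout,
    # e.g., after retopologizing every scan onto a common base mesh.
    assert va.shape == vb.shape, "meshes must share topology to blend"
    a.vertices = o3d.utility.Vector3dVector((1 - weight) * va + weight * vb)
    return a

# 40% of actor A's features, 60% of actor B's -> an entirely new background face.
hybrid = blend_faces("actor_a.obj", "actor_b.obj", weight=0.6)
o3d.io.write_triangle_mesh("npc_hybrid.obj", hybrid)
```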

Are you using AI to help with this?

We’re not using AI to process scans yet. However, some solutions have been appearing on the market, for example, packs of animation poses generated by a neural network.

Photogrammetry involves preparing and processing hundreds of photos. In the near future, I see a great chance to improve the workflow of this stage in areas such as automatic color calibration, deleting blurry photos, reducing the impact of lighting, etc. — and going forward — automatically preparing projects or cleaning up models. We will likely have to wait some time before the right tools appear, but in the meantime, we’re doing a pretty good job at automating repetitive operations using scripts.
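As an example of the kind of repetitive operation that lends itself to scripting, here is a minimal sketch that culls blurry photos before reconstruction using the common variance-of-Laplacian sharpness metric in OpenCV. The threshold and directory names are illustrative, not Techland’s actual tooling.

```python
import cv2
from pathlib import Path

BLUR_THRESHOLD = 100.0  # tune per camera/lens; lower variance = blurrier

def is_blurry(image_path: Path, threshold: float = BLUR_THRESHOLD) -> bool:
    """Return True if the image's Laplacian variance falls below the threshold."""
    image = cv2.imread(str(image_path), cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise ValueError(f"could not read {image_path}")
    # Sharp edges produce a wide Laplacian response; blur flattens it,
    # so low variance is a cheap proxy for an out-of-focus frame.
    return cv2.Laplacian(image, cv2.CV_64F).var() < threshold

# Cull blurry shots from a capture session before feeding the rest
# to the reconstruction software.
for photo in sorted(Path("capture_session").glob("*.jpg")):
    if is_blurry(photo):
        print(f"skipping blurry frame: {photo.name}")
```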

Could you describe what the scanning process looks like?

We do multiple scans of our actors/volunteers, asking them to perform a dozen or so facial expressions. The images taken by 32 synchronized cameras are then selected and color-calibrated. The next step is a 3D reconstruction of the individual facial shots, generating skin textures, and adding the resulting packs to our library. Our character artists may then adopt such scans and stylize them accordingly, adding hair, eye color, and other distinct features, such as scars. Naturally, this doesn’t happen instantaneously, and sometimes we have to wait a while for our character to finally appear in the game.
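For the color-calibration step, one common approach is to balance every shot against a neutral gray card visible in the frame. Below is a minimal sketch, assuming the card’s pixel region is known in advance; the coordinates and file names are made up, and this shows one standard technique rather than Techland’s pipeline.

```python
import cv2
import numpy as np

def calibrate_to_gray_card(image: np.ndarray, patch: tuple) -> np.ndarray:
    """Scale each channel so a known gray-card region becomes neutral.

    `patch` is (x, y, w, h) of the gray card within the frame.
    """
    x, y, w, h = patch
    card = image[y:y + h, x:x + w].reshape(-1, 3).astype(np.float64)
    means = card.mean(axis=0)   # per-channel average over the card
    target = means.mean()       # neutral gray = all channels equal
    gains = target / means      # per-channel correction factors
    corrected = image.astype(np.float64) * gains
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Hypothetical frame from one of the 32 cameras, with the gray card
# at a fixed position in the studio.
frame = cv2.imread("cam01_pose03.jpg")
balanced = calibrate_to_gray_card(frame, patch=(1800, 1200, 120, 120))
cv2.imwrite("cam01_pose03_balanced.jpg", balanced)
```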

Do you use these scans for other purposes?

Yes, we always ask our actors to make various expressions of joy, surprise, sadness, anger, as well as specific muscle contractions not correlated with any emotion. These are later used by our animators to create facial animation systems.

Are people reluctant to scan themselves?

The process is voluntary, and it’s usually video game fans who wish to immortalize themselves in a production like this. Keep in mind, though, that our artists add hair, scars, and clothes to the characters in the game, and sometimes that’s enough to make a person unrecognizable. We take the appropriate legal measures and sign contracts to ensure that the data, identity, and likeness of these individuals are protected.

What else do you scan?

We’ve done various objects such as rock formations, architectural elements, trees, foodstuffs, clothing, interior design elements, and many others. There are currently more than 2,000 objects in our database. Scanning technology allows us to maintain a high degree of photorealism while sparing our artists work such as modeling crates of potatoes. Some objects, in turn, like architectural elements that aren’t suitable for direct use in the game, can serve as references for making models.

Can you give examples of objects that are easier to scan than to model for artists?

Yes, there are several such things. When exploring the world of Dying Light 2 Stay Human, the player often encounters abandoned places in a peculiar state of disarray. Common sights there are mattresses covered with crumpled sheets or clothes. It turned out that realistically modeling such carelessly spread-out textiles would be quite time-consuming. That’s why we conveniently keep a few kilos of rags here in the back room, which we’ve scanned in a variety of combinations.

What about scanning a character’s clothes? Is that easier?

It’s actually pretty complex. Static textiles work well, but the clothing on NPCs has its own set of physics as it moves with the character. A complete set of clothes requires proper preparation and additional scanning of individual elements. But we do scan clothes. I made a separate rig for it, which automatically handles 360-degree scans.
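Rafał doesn’t describe the rig’s internals, but a 360-degree turntable scan typically reduces to a loop of rotate, settle, shoot. Here is a hypothetical Python sketch: the stepper-controller command ("ROT <deg>") is invented for illustration, while the camera trigger uses the real gphoto2 command-line tool.

```python
import subprocess
import time
import serial  # pyserial

STEPS = 24                       # 24 stops -> one photo every 15 degrees
TURNTABLE_PORT = "/dev/ttyUSB0"  # hypothetical stepper-controller port

def capture_turntable_pass(settle_seconds: float = 1.0) -> None:
    """One full 360-degree pass: rotate, settle, shoot, repeat."""
    with serial.Serial(TURNTABLE_PORT, 9600, timeout=2) as table:
        for stop in range(STEPS):
            # "ROT <deg>" is a made-up firmware command; substitute whatever
            # protocol the actual turntable controller speaks.
            table.write(f"ROT {360 / STEPS:.2f}\n".encode())
            time.sleep(settle_seconds)  # let vibration die down before shooting
            # Tethered capture via gphoto2; any camera SDK trigger works here.
            subprocess.run(
                ["gphoto2", "--capture-image-and-download",
                 "--filename", f"pass01_stop{stop:02d}.jpg"],
                check=True,
            )

capture_turntable_pass()
```

In practice, a full garment would need several such passes at different camera heights, plus the separate scans of individual elements Rafał mentions.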

Have you ever scanned anything strange or unusual?

Bags of garbage come to mind. There are quite a lot of them in Dying Light 2 Stay Human — after all, there’s no recycling in a post-apocalyptic world — so you can comfortably land on them when jumping off rooftops. We have hundreds of garbage bags in our database, both as individual models and in different arrangements, piles, etc. In general, our thought process is that if we can scan something and save our artists time, we try to include it in our library. Barrels and wooden elements are also good examples — we have a lot of those as well.

Is there something you’re not allowed to scan?

Products covered by industrial design licenses cannot be used commercially, even though there are no legal restrictions when it comes to actually scanning them. For example, we would not be allowed to add a 1:1 model of a particular washing machine to our game.

How do you scan environments?

We scan a lot of trees, both in their entirety and in parts: the trunk, the grain, the bark. All these models, and various combinations of them, then make their way into the game as small building blocks for the environment. We also scan smaller elements, like mushrooms and pine cones. This can definitely speed up the creation of biomes.

We also use drones for larger objects like rock formations. There is always a certain risk that nature in our game will look unrealistic or inconsistent with the laws of physics. Scanning helps reduce the likelihood of that happening. It also gives us accurate references for more complicated natural phenomena, like terrain erosion.

The trips we take to obtain such elements are by far my favorite part of the job. I filled up all the memory cards I had when I went to Iceland and I still saw elements of the landscape that I wanted to capture.

Naturally, scanning larger objects is a lot more complicated. The weather is usually the main reason the 3D reconstruction process takes longer.

Is there anything that can’t be scanned?

Animals, for example. It’s hard to keep them still, but the main obstacle is the fur. At this point, 3D reconstruction programs are unable to realistically reproduce hair. That’s why our actors are scanned without it, and we also ask men to shave. Funnily enough, kiwifruits never come out right: the fuzz keeps them from scanning accurately, and the models need heavy cleanup. Shiny objects, such as glasses, are also quite difficult, so they need to be coated with matte paint; we occasionally use a special spray that levels out the shine.

Your department is also involved in photogrammetry research and development. What does that mean?

Most importantly, we are constantly researching new technology, looking for scanning methods that offer greater precision, and taking on orders for the more challenging objects. Part of that work involves building photogrammetric rigs and developing new techniques for capturing images, but also devising optimal solutions for processing and preparing data.

For example, getting scans from a drone is still difficult — programming the flight path, capturing hundreds of images, and removing the baked-in lighting. At Techland, we are looking to strike a balance between quality and processing time, which would increase exponentially if we didn’t adhere to a certain culture of data management.
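To give a sense of the flight-path programming involved, here is a minimal sketch that generates an orbital ring of waypoints around a target, with the drone’s yaw facing the center at every stop. The flat-earth approximation, coordinates, and waypoint format are all illustrative; a real mission would be uploaded through the drone vendor’s SDK.

```python
import math

def orbit_waypoints(center_lat: float, center_lon: float, radius_m: float,
                    altitude_m: float, count: int = 36):
    """Waypoints on a circle around a target, each with yaw toward the center.

    Uses a flat-earth approximation, fine for radii of tens of meters.
    """
    lat_per_m = 1.0 / 111_320.0  # degrees of latitude per meter
    lon_per_m = lat_per_m / math.cos(math.radians(center_lat))
    points = []
    for i in range(count):
        theta = 2 * math.pi * i / count
        lat = center_lat + radius_m * math.cos(theta) * lat_per_m
        lon = center_lon + radius_m * math.sin(theta) * lon_per_m
        heading = (math.degrees(theta) + 180) % 360  # face back toward center
        points.append((lat, lon, altitude_m, heading))
    return points

# 36 overlapping shots in a 25 m ring at 15 m altitude around a rock formation.
for lat, lon, alt, yaw in orbit_waypoints(64.0, -19.0, radius_m=25, altitude_m=15):
    print(f"lat={lat:.6f} lon={lon:.6f} alt={alt} yaw={yaw:.0f}")
```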

Technological progress certainly helps us here. Thanks to intense competition in the software market, scanning software is developing all the time and helps to reconstruct objects faster, with greater accuracy and partial automation.

We also use scanning for the purposes of reverse engineering. That is, making a digital cast of an object and recreating it in real life using 3D printing. Incidentally, this technology is used in medicine. Prosthetic parts are made on the basis of tomographic images of bones and their digital casts. This technique has a wide range of applications, like facilitating the creation of 3D prints for camera mounting (e.g., I once scanned a helmet for the mocap department).

How do you see the future of scanning technology?

To quote a classic: “The future is now, old man”. On several occasions, I’ve stumbled upon old footage captured on my iPhone and realized it was enough to serve as a scan sample. A few minutes of video taken on a phone is enough to reconstruct something like a burned-out car body with an accuracy of tens of millions of polygons in 8K.
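Turning phone video into photogrammetry input is mostly a matter of slicing it into overlapping stills. A minimal sketch with OpenCV, with made-up file names; the frame interval would be tuned to the camera motion.

```python
import cv2

def extract_frames(video_path: str, out_dir: str, every_n: int = 10) -> int:
    """Save every Nth frame of a video as a still for photogrammetry input."""
    cap = cv2.VideoCapture(video_path)
    saved = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# A few minutes of 30 fps video, keeping every 10th frame, yields hundreds
# of overlapping views -- usually plenty for a reconstruction.
count = extract_frames("burned_car.mp4", "frames", every_n=10)
print(f"extracted {count} frames")
```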

A few years ago, this would have been unthinkable without the proper set preparation and the right photographic equipment. Aside from the increase in the number of megapixels in phones, LIDAR technology is also becoming a lot more common. Creating 3D avatars will soon become the norm.

We can certainly expect higher quality and increased processing performance. Most scanning software runs its calculations on the GPU, so each new generation of graphics cards opens the door for us to do more. We can also rely on plug-ins that import scanned assets directly into game engines, including LODs (in practice, RealityCapture already has this functionality). A good example here would be MetaHuman, which allows you to create a character through scanning.
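An LOD chain can also be generated offline by progressively decimating the scanned mesh. Here is a minimal sketch using Open3D’s quadric decimation; the triangle-count fractions and file names are illustrative, and this stands in for whatever the engine plug-ins do internally.

```python
import open3d as o3d

def build_lod_chain(mesh_path: str, fractions=(1.0, 0.25, 0.05, 0.01)) -> None:
    """Write progressively decimated copies of a scan as LOD0..LODn."""
    mesh = o3d.io.read_triangle_mesh(mesh_path)
    full = len(mesh.triangles)
    for i, fraction in enumerate(fractions):
        target = max(int(full * fraction), 100)  # never decimate to nothing
        lod = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
        o3d.io.write_triangle_mesh(f"scan_lod{i}.obj", lod)
        print(f"LOD{i}: {len(lod.triangles)} triangles")

# A raw scan can easily exceed a million triangles; distant LODs need far fewer.
build_lod_chain("scanned_barrel.obj")
```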

I’m also hoping for AI-based editing tools that classify various materials found in objects and intelligently clean/filter them. Currently, these operations are largely done manually and often take more time than the 3D reconstruction itself.

Rafał Kwaśny
Senior Technical Artist

When he’s not busy scanning thousands of bags of garbage, he spends his free time touring with metal bands as a singer and VJ.
He started his career in the demoscene and has spent three decades in gamedev, with credits spanning a wide variety of platforms. He loves car trips and 1980s Japanese tech. At Techland, he specializes in bringing people and environmental elements into the game world.
