When it comes to making a new game, one of the most significant development choices is whether to build on an in-house proprietary engine or one of the established, well-supported game development platforms. In the case of Striking Distance Studios and its upcoming game The Callisto Protocol, the team chose the latter – specifically Unreal Engine version 4.27.
I had a chance to speak with Mark James, the Chief Technical Officer of Striking Distance Studios, to discuss the business and development side of making a new game, why and how Unreal helped, and some of the bespoke improvements the team made to the engine.
IGN: With the immense challenge of setting up a new studio and team, how has the use of Unreal Engine been an enabler in your three-year schedule?
Mark James, CTO, Striking Distance Studios: Starting with an engine that has shipped hundreds of games is a great advantage. Workflows and tools are widely understood, and experience using a commercial engine makes hiring easier. There are always certain changes you want to make to the base engine based on the needs of the product, and at an early stage we decided on the key areas we wanted to enhance. We didn’t do this in isolation; we communicated with Epic on a regular basis about these changes to ease their integration. When you start a project you want to keep taking engine drops over the development cycle, and consulting with Epic on the best way to make our changes made subsequent integrations much easier.
You use Unreal’s Simple Demolitions System and have customized this for The Callisto Protocol. What are some of these customizations, and does this extend to the dismemberment system in the game?
This was an area we created from scratch. We knew we wanted a gore system that hit all the components of a great horror game. Our gore system blends blood spatter, chunk creation, and dismemberment to create the most realistic system we could. We wanted gore to be a diegetic health bar for each enemy, representing realistic flesh, muscle, and skeletal wounds. Not only was this used on enemies, but we also used it to represent the gory player deaths. In The Callisto Protocol, even losing is a visual feast!
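To make the idea of gore as a “diegetic health bar” a little more concrete, here is a minimal, hypothetical C++ sketch of a per-limb damage model in which hits peel through flesh, muscle and bone layers before a limb is severed. The names, layer budgets and thresholds are illustrative assumptions, not Striking Distance’s actual implementation.

```cpp
// Illustrative sketch only: a per-limb "diegetic health" model where damage
// peels through flesh, muscle and bone, and exhausting all layers triggers
// dismemberment and chunk spawning.
#include <cstdio>
#include <string>

enum class WoundLayer { Flesh, Muscle, Bone, Severed };

struct LimbState {
    std::string name;
    float layerHealth[3] = {40.f, 35.f, 25.f};  // flesh, muscle, bone budgets (assumed)

    WoundLayer Exposed() const {
        if (layerHealth[0] > 0.f) return WoundLayer::Flesh;
        if (layerHealth[1] > 0.f) return WoundLayer::Muscle;
        if (layerHealth[2] > 0.f) return WoundLayer::Bone;
        return WoundLayer::Severed;  // past bone: dismember and spawn chunks
    }

    void ApplyDamage(float dmg) {
        for (float& layer : layerHealth) {
            if (layer <= 0.f) continue;
            const float absorbed = layer < dmg ? layer : dmg;
            layer -= absorbed;
            dmg -= absorbed;
            if (dmg <= 0.f) break;  // remaining damage spills into the next layer
        }
    }
};

int main() {
    LimbState arm{"left_arm"};
    const char* names[] = {"flesh", "muscle", "bone", "severed"};
    const float hits[] = {30.f, 30.f, 30.f, 30.f};
    for (float hit : hits) {
        arm.ApplyDamage(hit);
        std::printf("%s now exposes: %s\n", arm.name.c_str(),
                    names[static_cast<int>(arm.Exposed())]);
    }
}
```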
The game utilizes ray tracing for some of its visual elements. Can you share if these are lighting and shadow-based elements from Unreal Engine 5 or have you gone in another direction?
It was important for us to achieve a physically consistent lighting and shadow model in the game. Contrast and occlusion make great horror.
Using our corridor-based scale of around 20 meters, we found that around eight lights could be affecting a surface of the environment. Unfortunately, we found that UE4 was limited to four shadow-generating lights, so first we worked on modifying the engine so that we could support a higher number of lights at a lower cost per light.
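The interview doesn’t detail how the team raised the per-surface light count, but a common way to support many lights at a lower per-light cost is tiled (or clustered) light culling, where lights are binned into screen tiles so each pixel only evaluates the few lights that can actually reach it. The sketch below is a generic, engine-agnostic illustration of that idea, not the studio’s modification; all sizes and positions are assumptions.

```cpp
// Generic tiled light culling sketch: bin lights into screen-space tiles so
// shading only pays for the lights overlapping each tile.
#include <cstdio>
#include <vector>

struct Light { float x, y, radius; };
struct Tile  { std::vector<int> lightIndices; };

int main() {
    const int tilesX = 4, tilesY = 4;
    const float tileSize = 32.0f;  // screen-space tile size in pixels (assumed)
    std::vector<Light> lights = {
        {16, 16, 40}, {100, 40, 20}, {90, 90, 60}, {10, 120, 25},
        {60, 60, 15}, {120, 120, 30}, {30, 90, 35}, {110, 10, 45},
    };
    std::vector<Tile> tiles(tilesX * tilesY);

    // Bin each light into every tile its radius overlaps (coarse circle test
    // against the tile center, padded by the tile half-diagonal).
    for (int i = 0; i < static_cast<int>(lights.size()); ++i) {
        for (int ty = 0; ty < tilesY; ++ty) {
            for (int tx = 0; tx < tilesX; ++tx) {
                const float cx = (tx + 0.5f) * tileSize;
                const float cy = (ty + 0.5f) * tileSize;
                const float dx = lights[i].x - cx, dy = lights[i].y - cy;
                const float reach = lights[i].radius + tileSize * 0.71f;
                if (dx * dx + dy * dy <= reach * reach)
                    tiles[ty * tilesX + tx].lightIndices.push_back(i);
            }
        }
    }
    for (int t = 0; t < tilesX * tilesY; ++t)
        std::printf("tile %2d shades %zu of %zu lights\n", t,
                    tiles[t].lightIndices.size(), lights.size());
}
```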
We looked at the UE4 ray tracing solution at the time and found that, for the number of shadows we wanted, we needed our own approach. So we built a Hybrid Ray Traced Shadows solution that applies ray-traced shadow detail to the areas of the screen that matter most to overall scene quality.
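As a rough illustration of that hybrid idea, the sketch below uses a cheap shadow-map lookup to classify each pixel and only traces a ray where the result is ambiguous (the penumbra band), so the expensive ray-traced detail is spent where it affects image quality. SampleShadowMap, TraceShadowRay, and the thresholds are hypothetical stand-ins, not the studio’s actual code.

```cpp
// Hybrid shadow sketch: trust the cheap shadow-map result where it is
// unambiguous, and trace rays only for uncertain (penumbra) pixels.
#include <cstdio>
#include <vector>

// Hypothetical cheap pass: 0 = fully lit, 1 = fully shadowed, anything in
// between means the filtered shadow map could not decide.
float SampleShadowMap(int x, int y) { return (x + y) % 5 * 0.25f; }

// Hypothetical expensive pass: one visibility ray toward the light.
float TraceShadowRay(int x, int y) { return ((x ^ y) & 1) ? 1.0f : 0.0f; }

int main() {
    const int width = 8, height = 8;
    std::vector<float> shadow(width * height);
    int raysTraced = 0;

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const float s = SampleShadowMap(x, y);
            if (s <= 0.1f || s >= 0.9f) {
                shadow[y * width + x] = s;          // unambiguous: keep it
            } else {
                shadow[y * width + x] = TraceShadowRay(x, y);  // penumbra: refine
                ++raysTraced;
            }
        }
    }
    std::printf("Traced %d rays for %d pixels\n", raysTraced, width * height);
}
```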
UE5 took a very different approach for lighting with Lumen that didn’t fit the internal corridor model we wanted for the game, but I’ve been very impressed with the quality of the UE5 demos so far.
With this being a cross-generation game, how has the team found the transition to the PS5, Series X and S coming from the previous generation?
We created TCP with the new generation of consoles in mind. We wanted to concentrate on the advanced hardware features that these consoles delivered. We have embraced technologies such as positional audio, the lightning-fast storage, and of course the ray tracing-capable GPUs as part of the design.
That said, we’ve always maintained a scalable content generation approach to guarantee that we are able to deliver a great looking and sounding game no matter what generation you play on.
Have the previous-generation versions presented any key hurdles to overcome?
The biggest change in the new consoles was the speed of the storage device. With the SSDs in these new consoles we could have seamless loading across the game.
Working this back onto the previous generation’s slower HDDs was the biggest design challenge. We needed to work out where to place loading volumes and, in some cases, loading screens where we didn’t need them on current gen.
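A simple way to picture that design challenge: the same loading volume can behave differently depending on the platform’s storage speed. The following sketch is purely illustrative (the zone name, platform enum and behavior are assumptions), showing an SSD path that streams asynchronously behind a transition and an HDD path that falls back to a blocking load or loading screen.

```cpp
// Illustrative loading-volume sketch: seamless streaming on fast storage,
// blocking load (or loading screen) on slow storage.
#include <cstdio>
#include <string>

enum class Storage { SSD, HDD };

struct LoadingVolume {
    std::string nextZone;
    bool zoneResident = false;

    void OnPlayerEnter(Storage storage) {
        if (zoneResident) return;  // already streamed in, nothing to do
        if (storage == Storage::SSD) {
            // Fast storage: kick off an async request; the transition corridor
            // is long enough to hide the load, so play continues seamlessly.
            std::printf("Async-streaming %s behind the transition\n",
                        nextZone.c_str());
        } else {
            // Slow storage: hold the player (door animation / loading screen)
            // until the zone finishes loading.
            std::printf("Blocking load of %s with a loading screen\n",
                        nextZone.c_str());
        }
        zoneResident = true;
    }
};

int main() {
    LoadingVolume volume{"Habitat_Ward_B"};   // hypothetical zone name
    volume.OnPlayerEnter(Storage::HDD);
    volume.zoneResident = false;
    volume.OnPlayerEnter(Storage::SSD);
}
```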
Do you plan to extend the console and/or PC versions with any other technical boosts beyond ray tracing, loading, and possibly framerates? For example, do you have denser geometry or similar for current-generation machines?
As a team we want to get the most out of any hardware spec we are given. We represented much more material detail, geometry density, and lighting interactions than in any of our previous projects. One of the goals we had early in the project was “every step was different.” We wanted to represent a world that was lived in and showed the practical design of a space prison. This meant investment in kit-based geometry and a complex material system to represent that diversity.
You mentioned you have incorporated Unreal Engine 5 elements in your bespoke branch of UE 4.27. Can you share any details on these, please?
As we worked to finish TCP on UE4 we looked at areas of UE5 that we felt would be useful for both development iteration and new console features. Epic even helped us move some of these features back into our custom-made version of the engine. There are no big components that stand out, but instead lots of smaller optimizations and workflow improvements that have helped in the final few months.
The character models, post effects, and general visual rendering of characters, faces and movement are above almost all other games I have seen, with main character Jacob (Josh Duhamel) genuinely looking like a live actor on video at points. What are some of the key technical improvements here that are delivering this?
The goal of photo-realistic characters starts with capturing models and materials with the correct light response. We invested heavily in a capture validation system that lets us switch between the photographic setups and our in-engine renders for easy review of the tech and authoring status. Using this approach, we concentrated tech investment in the areas where the character render differed from the photo reference. As an example, one of the key areas of tech investment for us was the correct rendering of translucency. This shows up in simple areas like how light is represented behind a character’s ear, but also in our enemies, where we render the translucent membranes on their skin.
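For readers curious what a translucency term looks like numerically, here is a tiny, hypothetical sketch of thin-surface transmission: light arriving from behind the surface is attenuated exponentially by the local thickness and added to the front-facing diffuse, which is why a thin ear reads brighter than a thick skull. The constants and vector setup are assumptions for the example, not the shipping shader.

```cpp
// Thin-surface transmission sketch: diffuse term plus an exponentially
// attenuated back-lighting term that lets light read through thin geometry.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

float ShadePoint(Vec3 normal, Vec3 toLight, float thickness) {
    const float diffuse    = std::max(0.0f, Dot(normal, toLight));
    const float backLight  = std::max(0.0f, -Dot(normal, toLight));
    const float absorption = 4.0f;  // assumed material absorption coefficient
    const float transmitted = backLight * std::exp(-absorption * thickness);
    return diffuse + transmitted;
}

int main() {
    const Vec3 n{0.f, 0.f, 1.f};
    // Light directly behind the surface: only the transmission term survives.
    std::printf("thin ear   : %.3f\n", ShadePoint(n, {0.f, 0.f, -1.f}, 0.05f));
    std::printf("thick skull: %.3f\n", ShadePoint(n, {0.f, 0.f, -1.f}, 0.60f));
}
```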
The horror and tension in the demos really comes out. How much has your sound team worked with the gameplay and rendering tech to enhance this and are they utilizing any new techniques with the new hardware, such as Tempest 3D Audio?
Audio is such an important part of delivering horror that we wanted to give it as much technology development as rendering. We think of audio like it’s a game feature.
Our goal was a physically-based audio model that represents both directional audio and audio interactions with geometry and materials. Traditionally these models have been too CPU-intensive to run in real time for games. With the new dedicated audio hardware in the new consoles, we now have the power to do this.
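To illustrate the kind of geometry and material interaction he is describing, here is a minimal, engine-agnostic sketch in which a sound path is attenuated by the absorption of each surface between the source and the listener. The material values and the Raycast stub are hypothetical; real systems trace many such paths per source, which is where the dedicated audio hardware comes in.

```cpp
// Audio occlusion sketch: attenuate a sound's gain by the absorption of each
// surface along the source-to-listener path.
#include <cstdio>
#include <string>
#include <vector>

struct Surface {
    std::string material;
    float absorption;  // 0 = acoustically transparent, 1 = fully blocking
};

// Hypothetical stand-in for a physics query returning every surface between
// the sound source and the listener.
std::vector<Surface> Raycast(/* source, listener */) {
    return {{"glass_panel", 0.35f}, {"steel_door", 0.80f}};
}

float OccludedGain(float sourceGain) {
    float gain = sourceGain;
    for (const Surface& hit : Raycast()) {
        gain *= (1.0f - hit.absorption);  // each material eats part of the energy
        std::printf("  through %s -> gain %.3f\n", hit.material.c_str(), gain);
    }
    return gain;
}

int main() {
    std::printf("final gain: %.3f\n", OccludedGain(1.0f));
}
```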
Sound alone gives us a tremendous sense of space even without a visual component. Getting this right creates greater immersion in the game. We use sound to create fear and tension whenever possible.
What is a key area of the game you are most proud of, be that gameplay, technology or other?
There’s so much I’m proud of in the game we’ve delivered. Be it our lighting techniques, immersive audio, or our combat gameplay, it’s hard to pick a favorite. The team is what I’m the most proud of. We’ve built a studio and a new IP during a global pandemic, all without compromising on quality. That takes true passion.