
Making of Fallen Warrior

Pre-production

The goal for this project was to create a character model suitable for a video game targeting PC/console platforms.

Starting from the initial prompt of “A warrior who has fallen in battle”, I began designing the character by gathering inspiration and reference material into a reference board that was consulted and added to throughout the project.

With an idea in mind, I created silhouette sketches and a rough 3D blockout that helped me refine the idea and find the key primary shapes and silhouette of the design. This also helped me understand how the design would work in 3D.

The initial blockout contained elements, such as the wing and sword, that would later be removed due to scope constraints.

High detail mesh

With the design confirmed, I started producing a high detail mesh. Using the blockout model as a base, I created subdivision meshes with quad topology and holding edges that would preserve the shape of the mesh when a subdivision algorithm was applied.

Armor damage and facial anatomy details were added to the mesh by hand sculpting with ZBrush tools such as the Trim Dynamic, Dam Standard, hPolish, Clay Buildup, Move and Standard brushes. Skin pore details and further armor damage were added with noise generation and brush alphas dragged onto the mesh with DragRect-stroke brushes.

Ornamental details for the armor were created by painting a mask onto the model and using Inflate to push the details out of the mesh. These details were then refined by hand using Dam Standard and hPolish.

A realistic base for the cloth folds on the trousers, jacket and shawl was created using the cloth simulation tools in ZBrush. The folds were then refined by hand with the Standard brush to add interesting detail to the silhouette and to make the areas around the strap meshes look as though the straps were pulled tight into the cloth. Seams were then modeled into the clothing by separating the mesh along a seam and moving one part of the mesh on top of the other.

Thread detail on the straps was added by creating an insert brush that would allow a mesh representing an individual thread to be inserted along a curve. A curve could then be drawn onto the model where a thread was desired.

The same technique was used to create a blockout of the hair that better represented what it would look like in the final result. This blockout was used as a guide when placing the hair cards later on.

Low detail mesh

The high detail meshes were retopologized to create the mesh that would be imported into the game engine. This was necessary because the project would be rendered in real time at interactive frame rates; a high detail mesh would take too long to render each frame, lowering the frame rate.

Retopology was done in Blender, using snapping tools to create a lower detail mesh on top of the higher detail one. The retopologized mesh contained the main edge loops needed to support deformation with additional vertices being cut in where needed to support the silhouette of the high detail mesh.

Overlapping pieces of geometry were given a similar topology to prevent clipping from occurring during deformation.

Parts of the mesh would have a cloth simulation applied to them in the game engine and therefore needed to maintain a higher density of vertices and evenly sized quads. This applied to the loose cloth and hair geometry.

I then created a deformation rig and bound the low detail mesh to it. This involved painting skin weight data for each vertex, encoding how much influence each joint in the rig would have on each vertex.

To weight areas of the model where vertices of separate meshes needed to share the same weight, the meshes were merged together using a quad remeshing algorithm inside Blender. Weights were applied to the remeshed topology and then transferred back to the original meshes. The vertices in the original mesh copied the weight of the nearest vertex in the remeshed topology.
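
The copy-from-nearest-vertex lookup at the heart of that transfer can be sketched in a few lines of NumPy (an illustrative snippet, not the actual Blender data-transfer implementation; the function and variable names are mine):

```python
import numpy as np

def transfer_weights(src_verts, src_weights, dst_verts):
    """Copy each destination vertex's weights from the nearest source vertex.

    src_verts:   (N, 3) positions of the remeshed (merged) topology
    src_weights: (N, J) per-vertex skin weights for J joints
    dst_verts:   (M, 3) positions of the original mesh vertices
    """
    # Pairwise squared distances (brute force is fine for small meshes;
    # a KD-tree would be used for production-sized geometry).
    d2 = ((dst_verts[:, None, :] - src_verts[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)  # index of the closest remeshed vertex
    return src_weights[nearest]

# Example: two joints, three remeshed vertices along a line
src_v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
src_w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
dst_v = np.array([[0.1, 0.0, 0.0], [1.9, 0.0, 0.0]])
print(transfer_weights(src_v, src_w, dst_v))  # rows copied from vertices 0 and 2
```

The key property is that vertices of separate meshes sitting at (nearly) the same position pick up the same nearest remeshed vertex, and therefore identical weights, which is what prevents the seams from tearing apart during deformation.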

The model was then split into UV sets based on the shaders that would render each part of it, as different shaders would be used for the skin, cloth and hair. Each UV set is represented by a different color in the image below.

The meshes belonging to each UV set were unwrapped to create texture coordinates. Seams were marked along edges to cut, and the resulting shells were straightened as much as possible to aid the texturing workflow while minimizing texture distortion. All UV shells were packed into their own 0-1 UV space.

A higher texel density was given to the UV set containing the face as this would be an area of focus for the model and would benefit from more texture resolution.
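
Texel density is easy to reason about numerically: for a square texture it scales with the square root of the ratio of UV area to world-space surface area. A rough sketch (the formula and numbers are illustrative, not measurements from this model):

```python
import math

def texel_density(world_area, uv_area, texture_res):
    """Texels per world unit for a UV shell.

    world_area:  surface area of the mesh piece in scene units squared
    uv_area:     fraction of the 0-1 UV space the shell occupies
    texture_res: texture width/height in pixels (square texture assumed)
    """
    return texture_res * math.sqrt(uv_area / world_area)

# Giving the face shell twice the UV area of an equally sized body shell
# raises its density by a factor of sqrt(2) on a 2048 texture:
print(texel_density(1.0, 0.08, 2048))  # ~579 texels per unit
print(texel_density(1.0, 0.04, 2048))  # ~410 texels per unit
```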

Hair

Hair was created using plane geometry to represent cards, which were placed onto the model by hand using curves. The high detail blockout mesh was used as a guide.


Before placing the hair cards, I created the textures for the hair, as these would give a closer representation of the final result while the cards were being placed.

The particle system inside Blender was used to simulate hair strands in strips of varying thickness. A thick, dense strip would cover the main areas of hair, a medium density strip would build volume into the hair, and thin flyaway strips would add detail and interesting shapes to the hair and its silhouette.

The hair particles were converted to meshes and the mesh strands were randomly split into separate objects. This was done so an object ID map could be baked and used to mask random strands during texturing, giving each strand group slightly different color values and adding variation, break-up and interest to the final hair color texture.
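
The random split can be sketched as follows (a simplified stand-in for the Blender step; the function name, seed and group count are illustrative):

```python
import random

def split_into_id_groups(strands, num_groups, seed=42):
    """Shuffle strands into ID groups so neighbouring strands rarely share
    a group; each group later receives its own flat color in the object ID
    bake, which texturing tools can use as a mask."""
    rng = random.Random(seed)
    groups = [[] for _ in range(num_groups)]
    for strand in strands:
        groups[rng.randrange(num_groups)].append(strand)
    return groups

# 12 hypothetical strand meshes split across 3 ID groups
groups = split_into_id_groups(list(range(12)), 3)
print([len(g) for g in groups])
```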

The hair textures, including object ID, opacity, ambient occlusion, normal, height and position, were baked from the particle meshes to a flat plane.

Hair card planes used the opacity texture as an opacity mask and the ambient occlusion texture as color to give a representation of how the final hair would look. Each card had an extra edge added down the middle that was used to create a sense of volume for the card when it was viewed from the side instead of being completely flat.

A curve was used to place each card onto the model. I started by placing thick cards to cover areas of the mesh that were occluded by the hair. Then a first pass of volume build up cards was placed followed by a second pass of volume build up. Finally, flyaway cards were placed.

With the hair cards placed, I modeled a higher resolution sphere mesh surrounding the hair cards. This cage mesh was used to modify the vertex normals of the hair card planes: each card vertex copied the normal of the nearest vertex on the cage, so the card normals pointed outwards as if they belonged to a single spherical surface. The vertex normals can be seen in the image below.
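
With a spherical cage, copying the nearest cage normal is effectively the same as pointing every card normal away from the centre of the hair volume. A minimal sketch of that idea (an approximation of the cage transfer, not the exact tool used):

```python
import numpy as np

def spherical_normals(verts, center):
    """Override vertex normals so the cards shade as one spherical surface.

    Instead of each card's flat face normal, every vertex normal points
    outward from the hair volume's centre -- the result the cage transfer
    produces when the cage is a sphere.
    """
    n = verts - center
    return n / np.linalg.norm(n, axis=1, keepdims=True)

# Two card vertices at different distances from the centre both receive
# unit normals pointing radially outward.
verts = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(spherical_normals(verts, np.zeros(3)))
```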

The shader used to render the hair would not sample the baked normal map for lighting calculations but would use the geometry's vertex normals instead. This creates a soft, smooth lighting effect when the hair is lit in the game engine, as the hair planes are lit as if they were a single spherical surface rather than individual planes facing in different directions.

Texturing

Substance Painter was used to paint texture data for each UV set. I blocked in the main colors and PBR material data to see the initial look of the character and tweak the colors and materials where appropriate. 

The skin color texture for the character’s head was hand painted. Red, yellow and blue color zones were painted in and blended together to create a base for the skin color, with details such as color break-up, moles and dirt painted on top. An exaggerated representation of the skin's color zones can be seen in the image below.

Roughness and specular maps were painted to make parts of the face, such as the cheeks, the tip of the nose and the lips, reflect more light. The baked cavity map was used to reduce reflection inside skin pores.

The material for the loose cloth sections, in the below image, was built using Substance Designer. This allowed me to design the warp and weft of the fabric and include details like damaged threads and color and roughness variation. Parameters were exposed in the graph that could be tweaked when the published archive was imported into Substance Painter. This allowed the color, normal intensity and roughness of the graph to be adjusted while applied to the model and viewed in context with the complete character.

Additional details were added to the loose cloth in Substance Painter including patterns and damage. Damage was painted into the opacity channel to add loose threads and torn areas.

In order to color the threads on the straps I baked a separate object ID texture that could be used to mask the threads and paint a separate material for them. The thread object ID mask texture can be seen below. 

Texture data was packed into channels to reduce the number of textures that would need to be loaded into memory and to avoid wasting unused texture channels.

Single channel grayscale data such as occlusion, roughness, metallic and opacity was packed into the individual RGBA channels of a single texture, instead of storing each map in its own RGBA texture with identical data in every channel. An example of the packed occlusion, roughness and metallic texture used for the character, with the data packed into the texture's R, G and B channels respectively, can be seen below.
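
The packing itself is just channel stacking. A minimal NumPy sketch of building such an occlusion/roughness/metallic (ORM) texture (illustrative only; in practice the packing was done in the texturing tools):

```python
import numpy as np

def pack_orm(occlusion, roughness, metallic):
    """Pack three grayscale maps into the R, G and B channels of one texture.

    Each input is an (H, W) uint8 array; the result is (H, W, 3) and can be
    saved as a single packed texture, freeing two texture samplers.
    """
    return np.stack([occlusion, roughness, metallic], axis=-1)

h = w = 4
occ = np.full((h, w), 255, np.uint8)    # fully unoccluded
rough = np.full((h, w), 128, np.uint8)  # mid roughness
metal = np.zeros((h, w), np.uint8)      # dielectric surface
orm = pack_orm(occ, rough, metal)
print(orm.shape)  # shape (4, 4, 3); each pixel packs 255, 128, 0
```

The shader then samples one texture and reads `.r`, `.g` and `.b` for the three maps, instead of binding three separate textures.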

Shading

Different shaders were used on parts of the model that respond differently to light hitting the surface, as each kind of surface needed its own lighting model. In Unreal Engine, this meant that each part of the model with a different light response used its own material and required a separate draw call.

The materials were built around a master material that exposed parameters which could be tweaked by material instances. This allowed the main implementation of each material to be created once and then adjusted for multiple variants of that material.

For this character, a separate material was used for each UV set created earlier: one each for the hair, skin, cloth and the rest of the character. The hair uses Unreal Engine’s Hair shading model, the skin uses the Preintegrated Skin shading model, the cloth uses the Cloth shading model and the rest of the character uses the Default Lit shading model.

Additional micro detail was added to the skin material to give seemingly high resolution pore detail. A small, low resolution normal map containing skin pore data was combined with the normals sampled from the skin normal map created in Substance Painter. The small map is tiled, adding detail independently of the resolution of the main model textures.
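
Combining a tiled detail normal with a base normal is commonly done with a whiteout-style blend; a sketch of that math in NumPy (a common technique, not necessarily the exact Unreal node setup used here):

```python
import numpy as np

def blend_detail_normal(base, detail):
    """Whiteout blend of a tiled detail normal onto a base normal.

    Both inputs are unit tangent-space normals with components in [-1, 1]:
    the XY perturbations add, the Z components multiply, and the result is
    renormalized.
    """
    n = np.array([base[0] + detail[0],
                  base[1] + detail[1],
                  base[2] * detail[2]])
    return n / np.linalg.norm(n)

base = np.array([0.0, 0.0, 1.0])      # flat area of the baked skin normal map
detail = np.array([0.1, 0.0, 0.995])  # slight pore perturbation from the tiled map
print(blend_detail_normal(base, detail))
```

Because the detail map is sampled at a much higher tiling rate than the base map, the pore perturbation repeats across the surface at a finer scale than the baked texture could store.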

Physics

The cloth physics system provided in Unreal Engine was used to simulate loose clothing and hair sections of the model in real time.

Simple collision shapes, such as capsules, were used to prevent the simulated sections of mesh from clipping through non-simulated parts of the model.

Simulation parameters were tweaked to achieve a stiffer looking simulation. The cloth data’s Anim Drive Stiffness or iteration count could be adjusted to obtain a stiffer simulation; however, increasing the iteration count incurs a performance cost on the CPU. For this reason, only the Anim Drive Stiffness parameter was modified.

Posing

The deformation rig was posed and keyframed before being exported into the game engine ready for presentation.

Lighting

I used an HDRI of an overcast scene as the main ambient light. This gave the scene a base amount of neutral light whose intensity could be adjusted, removing any completely dark areas from the model. The HDRI used was authored by Jarod Guest and Sergej Majboroda and is available here.

A directional light and a point light were used as fill lights to illuminate the model from above and to the side.

An area light was used as a rim light to create a highlight around the back of the model and make the subject pop out of the background.


Finally, a global post process volume was used to manually control the exposure.

Rendering

The model was rendered in real time as if being used in a video game. A cine camera, provided in Unreal Engine, was used to view the model as this allowed the focus distance to be manually controlled.