Normal Maps

What are Normal Maps?

A normal map is an image that stores a direction in each pixel; these directions are known as normals.

The RGB channels are used to control the direction of each pixel’s normal.

A normal map is commonly used to fake high-resolution details on a low-resolution model. Each pixel of the map stores the surface slope of the original high-res mesh at that point. This creates the illusion of more surface detail or better curvature. However, the silhouette of the model doesn’t change.

Normal Maps explained. (YouTube, 2017).

Normal maps are similar to bump maps in that they affect the normals, creating the illusion of detail without having to rely on a high poly count. The main difference between bump maps and normal maps is that bump maps only record height information, whereas normal maps use RGB values to encode the orientation of the surface normals: the values in the RGB channels correspond to XYZ directions.
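The RGB-to-XYZ mapping can be sketched in a few lines of Python (a hypothetical helper for illustration, not taken from any particular package): each 8-bit channel is remapped from 0–255 to the -1..1 range and the result is normalised to unit length.

```python
import math

def decode_normal(r, g, b):
    """Map an 8-bit RGB normal-map pixel to a unit-length XYZ normal."""
    x, y, z = (c / 255.0 * 2.0 - 1.0 for c in (r, g, b))
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# The typical flat "bluish purple" pixel (128, 128, 255) decodes to a
# normal pointing almost straight out of the surface, along +Z.
print(decode_normal(128, 128, 255))
```

This also explains the characteristic colour of tangent space maps: a flat surface stores a normal of roughly (0, 0, 1), which encodes back to that familiar bluish purple.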

They are commonly used today to bake high poly details onto low poly models, giving the appearance of high resolution detail.

There are two types of normal maps: tangent space and object space.

Tangent space maps are bluish purple and can be used on objects that need to deform, such as an animated character.


An older tangent space normal map from our dinosaur model.

Object space normal maps are more of a rainbow colour, and have slightly improved performance compared to tangent space maps. Object space maps are used on objects that need to move and rotate, but not necessarily deform.


Object space normal map on a tyre. (2017).

Using Normal maps in Arnold

– When using Color Management in Maya (2017), normal maps should be set to Raw

– In the 2D attributes, the map must be set to tangent space normals


– Flipping certain channels may be necessary: Flip R flips the red channel, Flip G flips the green channel, and Swap Tangents swaps the red and green channels of the normal map.
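The effect of these toggles can be sketched as simple channel operations (hypothetical helpers for illustration; in Arnold the work happens inside the shader, this just shows what happens to each pixel):

```python
def flip_r(r, g, b):
    """Invert the red (X) channel of a normal-map pixel."""
    return (255 - r, g, b)

def flip_g(r, g, b):
    """Invert the green (Y) channel of a normal-map pixel."""
    return (r, 255 - g, b)

def swap_tangents(r, g, b):
    """Swap the red and green (X and Y) channels."""
    return (g, r, b)

pixel = (30, 200, 255)
print(flip_g(*pixel))  # (30, 55, 255)
```

Flipping the green channel is the most common fix in practice, since different packages disagree on whether +Y in the map points up or down.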


YouTube. (2017). CG101: What is a Normal Map? [online] Available at: [Accessed 21 Mar. 2017].

(2017). Index. [online] Available at: [Accessed 21 Mar. 2017].



Subsurface Scattering

What is Subsurface Scattering?

Subsurface scattering works by simulating how light penetrates a translucent surface, like a grape, and is absorbed and scattered. Subsurface scattering is critical for creating materials for all kinds of surfaces, like paper, marble, wax and, most importantly, skin. Without subsurface scattering, skin won’t look realistic, because real skin has a distinct look, with a level of translucency. (2017).

When light hits a surface, multiple things happen at once: some of the light is reflected off the surface, giving specular light. However, with materials that have a level of translucency, some of the light rays are absorbed into the surface. Once inside, the light rays scatter all around and exit the surface at different locations, producing subsurface scattering. It may not be apparent at first, but human skin is actually very translucent. To see an example of this, hold a bright light behind someone’s ear. You’ll be able to see how the light is absorbed and passes through the skin. It also illuminates the inner workings, so you’ll likely be able to see blood vessels, skin pores, etc. (2017).


Light shining through the fingers. (2017).

Translucent objects have two major components, and subsurface scattering allows you to create both of them.

The first is forward scattering: light enters the front of the object and is reflected back toward the viewer. This is what gives materials like wax or skin their soft appearance.

The second component is called back scattering, and this occurs when a light is illuminating the backside of the object, and the light rays actually pass completely through to the other side. As mentioned previously, a great example of this would be if you set up a light behind someone’s ear or placed your hand directly in front of a light source.

Without subsurface scattering it would be very difficult to simulate these two distinct components of translucent materials like wax or skin.

The physics behind Subsurface Scattering

When a photon hits a surface, it has the chance to either be reflected, be absorbed, or penetrate the surface. The first two characteristics are captured by local illumination models, where the final colour at any point need only consider that point, with no influence from other points. This concept is implemented in a number of shading models (or BRDFs – bidirectional reflectance distribution functions) that you may be familiar with: Blinn, Phong, Lambert, etc. These models approximate local illumination well, but when a photon enters a translucent surface and bounces around inside, it may well exit the surface at a point other than where it entered. This is what happens in subsurface scattering: light enters at a point P and exits at another point, making local illumination models unfit for representing these non-local effects.
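A minimal sketch of one of those local illumination models, Lambert diffuse, shows exactly the assumption that subsurface scattering breaks: the intensity at a point depends only on that point’s own normal and the light direction (a toy function for illustration, not a production shader).

```python
def lambert(normal, light_dir):
    """Diffuse intensity = max(0, N . L) for unit vectors N and L.

    Purely local: the result uses only this point's normal, so light
    entering a neighbouring point can never contribute here.
    """
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

print(lambert((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # light head-on -> 1.0
print(lambert((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))  # light behind -> 0.0
```

Note the second case: with a purely local model, a surface lit from behind is simply black, which is why back scattering (light through an ear or fingers) cannot be represented without subsurface scattering.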

Tutorial on Arnold and SSS

References

(2017). Understanding Subsurface Scattering – Capturing the Appearance of Translucent Materials. [online] Available at: [Accessed 21 Mar. 2017].


Dinosaur Blend Shapes

To create the dinosaur’s blend shapes, I wanted to use a reference that I could match as closely as possible. In this case it was Aardman’s Purple and Brown.

Purple and Brown. (YouTube, 2017).

I was able to slow the footage down and focus on Brown’s chewing action. I noted the following key shapes and drew them out to model.

Below is the final model for the blend shapes. I also included a smile for the final shot of the completed logo.


Overall, I think the facial expressions look pretty good! And my team really liked them too.


Dinosaur Texturing

As mentioned before, our dinosaur is to have a Morph-style plasticine look. After researching a bit further, I came up with the following pipeline.

– Model the high poly in ZBrush

– Create the low poly version in Maya by retopologising

– UV map the low poly model

– Bring the low poly into ZBrush and create the cracks and smudges

– Bake the high poly onto the low poly in xNormal or Substance Painter

High Poly Model


This was originally done by Even (she was assigned the modelling, UVing and retopology). The role was then taken on by Jack and myself.

Below is the original mesh we were given. Jack took the object and created a smoother model; we found areas like the legs had inverted and corrupted geometry, as the original retopology attempt was simply the high poly decimated to the extreme.

Retopology of High to Low

Jack and I then worked on the topology together. Our original plan was to create both an ‘easy’ and a ‘hard’ rig setup: the easy rig would cover only the neck and tail, as these are the only parts really moving, while the hard rig would be the entire quadruped rig (Robert’s task).

Jack tried using ZRemesher in ZBrush to get perfect topology; however, this didn’t work out, giving n-gons and a lot of uneven areas. I tried using the normal maps from unwrapping it, but they were creating odd artefacts, especially around the areas where the spikes were. The compromise we reached was to create the spikes on a separate mesh instead.


ZRemesher did not work out (above).


We decided to use Quad Draw to achieve the topology needed, based on references we found online for dogs etc. We wanted the topology to work with both the ‘easy’ and ‘hard’ rig setups.

Detailing in Zbrush

Using Jude’s advice, I was able to plan out the texturing of the dinosaur, compiling a list of things it should include: smudges, scratches and fingerprints.

I created the fingerprints by making my own alphas in ZBrush. I used the below video as a guide, as I was unable to find any free ones available (which was surprising).

Creating alphas in ZBrush. (YouTube, 2017).

Baking the high to the low poly  

I experimented with two different baking methods here: first xNormal, and then Substance Painter.

Firstly, I tried xNormal. I found excellent documentation for creating perfect bakes; having tested basic methods before, I wanted to look into how to bake more accurately.

The article above suggested taking the following things into account:

Adjust the ray distance and default mesh scale – the scale between 3D packages differs, so to make sure xNormal gets a good enough result, and that the rays hit a big enough mesh, we should set the default mesh scale in xNormal to 16. This way we make sure the mesh is big enough to catch all the rays.

We should also set the ray distance to a higher value than the default. The default value in xNormal is 0.5, which is too low: a lot of rays miss the target and double projections occur. A good value to start with is a ray distance of 50, tweaked from there.


Set the correct bucket size and anti-aliasing settings –

The next setting we want to change is the bucket size. We used to pick the smallest bucket size (16), because the general impression was that it would be more detailed, rendering 16 pixels per CPU core; but after some research we found that the highest (512) is faster. So if you use 512 and have a quad core with 8 threads, you will see 8 render blocks each trying to render 512 pixels in one go. What we don’t want to set higher is the AA setting: 1x AA is enough, and the difference from 4x AA is minimal.

The same image was rendered once with an AA setting of 1x and once with 4x. The 4x AA took 43 minutes to render, while the 1x AA took only roughly 3 minutes. The difference is minimal and doesn’t justify the time. Also, once the map gets reduced in size, say from a 2048 map to a 1024, Photoshop applies AA to the image as it is scaled down.


I found I wasn’t quite getting the result I wanted with my mesh; the padded look below kept appearing in my normal map. I asked John Hannon (resident wiz kid) and he was able to explain what I had to do.


The mattress monster.

John explained that I had to smooth my normals. I could do this by going to Mesh Display – Unlock Normals. I then had to smooth the edges of the character and harden the outer edges of the UV shells.

This worked and I was able to create a bake on the character.


I then added the character’s colouring and increased the subsurface scattering amount to give more of a sculpted look.

You can see here that I started to work on the blend shapes for Jack to create the facial rig; I’ll talk about these in a separate blog post.

Meeting 3- feedback and changes

So we had our meeting with Jude, with a general update on where we were at.

We first showed Jude the vectors that Robert had created (see below).




Jude really liked these- especially the octopus. He did give a few changes for each though.

Gorilla – change the blue to a dark blue that is incorporated in their colour palette, simplify the stage/remove the tassels, and change the red. The gorilla should be closer and bigger.

The 3D models themselves

The dinosaur – texture on the skin, thumb prints and a little squishy look; the eyes are on one side and not integrated.

Octopus – the current model is too alien, and he wants us to simplify it to the graphic look: 4 legs, with the tentacles looking more simple, like an arm. Cassie will talk about this later, but while modelling we noticed it was hard to create the exact look we were going for while keeping the topology on the one character. Instead we decided to keep the limbs separate, much like the Ben and Jerry’s advertisement for One Sweet World (below).

Ben and Jerry’s One Sweet World (YouTube, 2017).

Chicken/Hen – Jude loved this model, and wanted us to explore texturing him more with wood textures, staining with the colours from their colour palette.


Texture Tests- Dinosaur

One of the things Jude mentioned in our meeting was that he wanted a texture test of the Plasticine material for the dinosaur, to see how the material would look in the light.

This task was again assigned elsewhere, but I wanted to make sure it looked perfect; I also wanted to learn more about shader creation through the likes of ZBrush, Substance Painter and xNormal.

I found different processes for creating this smudgy look in ZBrush. It would involve having the character completely UV’d and in low poly, then taking this low poly version into ZBrush again; from there, the Clay and Clay Buildup tools can be used to create the look.


I looked mainly at Aardman for the specific look, their characters being made from Plasticine.





A test of the shader is below; however, for the actual dinosaur I want to focus on the softer, modelled look. I will also look into creating alphas to make fingerprints.


Modelling the Gorilla

Due to circumstances regarding our last presentation, we made an executive decision: one person would focus on the 2D vectors to be completed for our 2D gifs for MashMob’s website.

This would allow for consistency in style and ensure the work was finalised in time for constructive criticism from Jude. Robert offered to do the 2D work, so I decided to take on the gorilla model to save time.

Our idea was to keep him vectorised and flat, so that’s how I modelled him, without the need for a fully topologised model. I noticed, after watching the gif on their website, that the character was only partially 2D – his arms were in fact 3D, allowing for the rotational movement demonstrated in the gif.


Stop Motion- Aardman and the Likeability

One of the things I wanted to look into more for this project was the likeability of characters, specifically Aardman’s, and what makes their work so enduringly popular with audiences.

An Interview with Aardman founder Peter Lord

In the interview, Lord was asked: with the rise of CGI animated films, what is it about the hand-made style adopted by Aardman that remains so popular?

Lord replied that it was to do with the “warmth… and intimacy, like the viewer shares in the experience that little bit more. Most of us have some sort of memory from childhood playing with cowboys, dolls or puppets, things that you give life through play, and that’s one of the things we do. The viewer knows it’s hand-made, they know it’s a puppet, but they believe it’s alive at the same time. There’s magic in there somewhere. I’m not knocking computer animation… Whereas ours is a feat of human ingenuity with touch and feel that people seem to care about. In Britain today, there are three stop motion films being made. We’re making one in Bristol, there’s one being made in London by the same team that made Fantastic Mr. Fox and one made in Wales by Michael Mort. That, twenty years ago, was unthinkable. It’s great that there’s an appetite for it.” (Lickley, 2017).

An interview with creator Nick Park

Park describes how he discovered his medium of choice.

“I suddenly saw what a magical effect it had. Everybody knows what a lump of clay is and seeing it come to life is quite a magical thing. You can see the material and see it moving and suddenly gaining a character somehow,” he says. (Gibson, 2017).

Shaun the Sheep


Shaun the sheep and co. (Gritten, 2017).

Shaun the Sheep is a show aimed more at families, and has developed a following in many places including Cairo, Saudi Arabia, Qatar, Australia and Asia. In Japan, arguably the epicentre of Shaunmania, an exhibition about the little sheep’s world toured major cities – in Tokyo, it drew 30,000 people in five days.

Part of the reason for its popularity abroad: Wallace and Gromit is quintessentially British, but slightly old fashioned, with humour stereotypical of British culture. This is different for Shaun – the only sound to issue from him is ‘baa’, so there are no cultural obstacles or language barriers for the rest of the world.

Yet in his British homeland he has yet to achieve the same kind of recognition or breakthrough he enjoys abroad. ‘Shaun is easily Aardman’s biggest global brand,’ Clarke says. ‘We’re [broadcast] in 170 countries. Don’t ask me to name them all. But you’d never know it from this country.’

His success, as described by Nick Park, is due to him being ‘cute and cool at the same time.’ He also believes his simple shapes help with merchandising.

Starzak, Burton and Kewley watched several silent films, some by Jacques Tati, who famously used sound as a way of telling a story. Burton says, ‘On a practical level Shaun can’t do much with his face, but then again, that’s the Buster Keaton approach to comedy – slapstick and deadpan combined.’ Starzak also feels they were influenced by Pixar’s film WALL.E. ‘It had over 30 minutes without [human] dialogue. And everyone I know thinks that’s the best part of the film.’ (Gritten, 2017).

Animators also watched films like The Artist and Mr Bean to create this humour. This is another thing we could look at for the acting and timing of our jokes/ puns.

This applies to our own ident- there is no dialogue at all throughout the series.


Gritten, D. (2017). How Shaun the Sheep became a global phenomenon: behind the scenes at Aardman. [online] Available at: [Accessed 22 Mar. 2017].

Lickley, P. (2017). An interview with Aardman Animations founder Peter Lord – The Bradford Review. [online] The Bradford Review. Available at: [Accessed 22 Mar. 2017].

Gibson, O. (2017). Interview: Nick Park, Oscar-winning creator of Wallace and Gromit. [online] the Guardian. Available at: [Accessed 22 Mar. 2017].