After Effects- Blending Modes

When compositing in After Effects, blending modes are the main way of combining elements together. Alec talked about these briefly in class and I wanted to explore what each of them means.

Screen- a blending mode that gets rid of the dark parts of the image. The screen blending mode allows us to composite elements shot on a black BG into the scene. You will typically use this blending mode in a VFX context, as most VFX elements come either pre-keyed or on a black background.

Multiply– the opposite of Screen. It takes out the light parts of your image.

Add– this mode is like Screen, leaving the highlights of the image while removing the dark parts. However, it adds its colour values to the image below, causing the image to greatly increase in brightness.

Colour Burn– another transfer mode that darkens images. However, Colour Burn differentiates itself in the way it blends with the background. As the name implies, it creates a burned look, making it great for grunge and vintage styles- highlights are retained, and it is typically used to add a dirty vintage effect.

Overlay– changes the colour of the mid-tones while preserving the light and dark parts of your image. Overlay is typically used to add stylised elements to your composition.

Soft Light- very similar to the Overlay transfer mode. However, Soft Light tends to be very subtle whereas Overlay is more noticeable.
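To pin down what these modes actually do to pixel values, here are the standard per-channel formulas behind the modes above, sketched in Python on normalised 0-1 values. These are the common textbook formulas, not necessarily After Effects' exact internal implementation.

```python
# Standard per-channel blend mode formulas, channel values in 0-1.
# A sketch of the usual textbook maths, not AE's internal code.

def screen(base, blend):
    # Inverts, multiplies, inverts again - dark areas drop out.
    return 1 - (1 - base) * (1 - blend)

def multiply(base, blend):
    # The opposite of screen - light areas drop out.
    return base * blend

def add(base, blend):
    # Straight sum, clamped - brightness increases quickly.
    return min(base + blend, 1.0)

def colour_burn(base, blend):
    # Darkens the base depending on the blend value.
    if blend == 0:
        return 0.0
    return max(1 - (1 - base) / blend, 0.0)

def overlay(base, blend):
    # Multiply in the shadows, screen in the highlights,
    # so mid-tones change while the extremes are preserved.
    if base < 0.5:
        return 2 * base * blend
    return 1 - 2 * (1 - base) * (1 - blend)

# Pure black screened over a mid-grey leaves it unchanged:
print(screen(0.5, 0.0))  # 0.5
```

This is why elements on a black background composite cleanly with Screen: black contributes nothing, so only the bright parts of the element show up.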


The Beat: A Blog by PremiumBeat. (2017). 6 Useful Blending Modes in After Effects. [online] Available at: [Accessed 9 Jan. 2017].

Track Test- PFTrack

I did a test track with PFTrack using the various tutorials I had watched. Unfortunately I could not export the track to Maya due to the licensing on the educational software. If someone wants to give me £3,000 I’d happily accept it.

Below is a screen record of the track itself.

I used the 2016 version of PFTrack and I was really impressed with the results. In different tutorials, and from other people’s personal experience, the auto track was deemed terrible and the user track was needed to fix it. However, when I applied the auto track it seemed to work OK. I have included some of the different stages in making this track happen.

Image Manipulation 

I used this node to remove some of the noise in the footage. I first worked on the filter- increasing the sharpen radius to 2. I then worked with the colour channels. I found the blue the noisiest channel, followed by the red and then the green. I had to increase the red and green levels more, and the blue slightly. Below are the different channels for these.

I also increased the contrast and decreased the gamma. I then cached the footage to ensure it was saved for the next steps.


Auto Track

I then tested the auto track, setting the search mode to better accuracy and the deformation to rotation and scale.



Track testing. 

I used the thumbnail objects as suggested in the Digital Tutors tutorial. They do not shift or shake, which is a good sign the track was a success.



Creating a Better Track

Andrew Coyle’s blog directed me to this video showing how to create a cleaner track, using user track in PFTrack.

User Tracks (YouTube, 2016).

The first step is to create an auto track like previously shown in the DT tutorials. The next step is to create a user track. To create a track you click on an area of the footage you want to track- such as the dots in the image.

The window of the tracker needs to be made smaller as with the movement the track can skip outside the area.


Click the double arrow button under the canvas area (track forward). You can click stop during this process to move the tracker back onto the dot.

The tutorial explains there are two different tracks in PFTrack- hard and soft. Soft tracks are done automatically by the computer itself, whereas hard tracks are done manually.

The rest of the process follows what I have seen before- exporting the scene etc.


YouTube. (2017). PFTrack 2011 Tutorial – User Tracks. [online] Available at: [Accessed 6 Jan. 2017].

PFTrack- Digital Tutors

First look at PFTrack


Tree view- the node-based tree showing the pipeline.

Middle rectangle- the canvas, giving the ability to view what you are working on.

First things first- create a new project. Do this by selecting the create button under the canvas window.


If you click create again- the screen will change to the below image.


On the LHS we still have the tree view; the middle is the media bins, and then the navigator. The navigator moves through the pipeline, allowing you to choose where you pick the footage from. N.B. PFTrack does not have a save button but automatically saves the file for you (yay).

Accessing PFTrack’s Preferences 

This involves setting up the application to run best with your machine. To access the menu, click the spanner/cog icon in the top RH corner.

In the cache option- once the bar reaches 100%, click the clear cache button. If you have a computer with high specs, you can up the cache to 4GB.

In General, you can tell the software how many CPUs to use. In the Use dropdown you can type, for example, 1/8 to allocate how many cores you want the computer to use. Next is the undo box, setting how many undos you can use- I set this to 20.

You can also edit the scene units- i.e. in centimetres, feet etc.

In the Cinema tab you can change the view of the trackers- both the tracker colour and the shape.

In Export there is the saving path- where a project is set, there is a sub-folder named export. You can also alter the file type (JPEG, TIF etc).

Using the Media Admin Menu

Click the two-page icon on the canvas viewport. Then find the footage and click and drag it to the tree port. The settings for the footage are loaded into the window below.

N.B. click the C on the timeline to cache the footage to the timeline. In the settings cache menu, check what the percentage is- if it is at 99%, clear the cache.


Next, look at the media admin menu. You will see two loaded clips- the bottom being the original and the top the cached one. On the RHS you will see a green-lit ‘OK’, which means the software can locate the footage and read it.

Going back to the footage- the tutorial addressed two problems that are common in tracks. The first is lens distortion. This causes problems as tracker points are put onto the distortions, giving false tracking information. The second problem is that the picture is a bit blurred. Blurry footage removes the detail and contrast that trackers like to attach to. This is fixed with corrections inside the tracker.

Click create to show all the nodes in the editor, then click on the image manipulation node. The tutorial demonstrated that ticking the filter option in this node and adjusting the sharpen radius removes the cache from the footage. The colour can also be turned on and altered- but which channel is the cleanest to use? The RGB channels are accessed through the three dots under the tree view.


In the example above the green is the cleanest and the blue is the noisiest. Therefore, the greens and reds need to be brought up more, and the blues slightly. The next step is to also increase the saturation and reduce the gamma. To reset any of these, click the R beside the levels.


Next, go back to create and go to the undistort node. This node will allow you to match lines up with lines in the footage.

To show the computer the level of distortion, the example used here was a lamppost. Drawing a line at the top and bottom of the post, then adding more points in the middle to follow the bend, gives the software the amount of correction needed.


Once this is done, click the solve button and the computer will solve the distortion. Then go ahead and cache the footage again.

Tracking Footage Manually and Automatically

Going back to the create option to show the node menu, we have two options: an auto track or a manual track. The different settings are as follows:

Search mode- better accuracy

Deformation- scale and rotation (due to being a hand held camera)

Consistency- free camera (as it is handheld)

Failure Threshold- setting a lower value (e.g. 0.5) for problematic footage is better- any tracker outside 0.5 will be a bad translation and will not be made. Blur image also helps if there is too much noise.

Clicking auto track then tracks these. The software tracks forward then backwards to average out the values of the trackers.

The downside of the auto tracker is that it creates soft tracks- the trackers do not stay true to their point value.

In this case, a user track will have to be used.

Solving The 3D Camera

Click on the camera solver node.

We can tell the software which trackers are hard (true to their origin) and which are not. If the camera is shaky you are best to set the translation and rotation to medium smooth. After this, select solve all.


Afterwards, go to the errors tab. Any tracker over 1 has a high error margin. Select trim.
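The trim step boils down to filtering trackers by their solve error. A rough sketch of that idea in Python- the Tracker structure here is made up for illustration, not PFTrack's API:

```python
# Hypothetical sketch of trimming: discard any tracker whose
# average error exceeds a threshold (1.0 in PFTrack's errors tab).
# The Tracker class is illustrative, not PFTrack's actual API.

class Tracker:
    def __init__(self, name, error):
        self.name = name
        self.error = error  # average reprojection error in pixels

def trim(trackers, threshold=1.0):
    # Keep only trackers at or below the error threshold.
    return [t for t in trackers if t.error <= threshold]

trackers = [Tracker("A", 0.3), Tracker("B", 1.7), Tracker("C", 0.9)]
kept = trim(trackers)
print([t.name for t in kept])  # ['A', 'C']
```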


Go to the 2H button to see how the solve looks in 3D space- holding command will allow you to rotate around it.


Orienting our Scene and Test Objects

As seen in the last tutorial, the camera with the point cloud is not sitting on the ground. The ground (grid) is floating, and the horizon line of the grid does not match that of the footage.

Use an orient scene node to adjust the scene to the footage. In the edit mode box on the RHS you can change between the different orientation modes (rotate, resize, translate etc.).

Once matched up, a test object can be used to test how legit the track is. The tutorial used a thumb tack as an example. These are added to the trackers by clicking on the tracker, then clicking on the marker. After the first one, the duplicate button can be used to mark other trackers.


Exporting the Scene to Maya

To export, create an export node and then select the type of file. Before exporting, ensure in the objects mode that the thumbtacks are removed so they don’t go too. Finally, select export to export the camera.


When opening the scene in Maya it is important to ensure the size of the scene matches that of the track (cm, m, ft etc.). Once the camera is brought into Maya, increase the tracker size to 0.5 to make the trackers more visible. In this example the trackers do not match the original footage, so the altered footage from PFTrack also has to be attached to the camera as an image plane.





Camera Tracking in After Effects

For the next part of my compositing I wanted to have a go at creating a camera track, using After Effects as it was available for me to use. I found tutorials online on how to do a basic track in AE and then bring it out to Maya to incorporate it.

With a footage layer selected I did the following: chose Effect > Perspective > 3D Camera Tracker. Alternatively, in the Tracker panel, click the Track Camera button. The 3D Camera Tracker effect is then applied. The analysis and solving phases occur in the background, with status appearing as a banner on the footage and next to the Cancel button.

It then solves the footage, giving little coloured X’s on the screen.

Attaching content into a scene containing a camera

  1. With the effect selected, select the track point or multiple track points (defining a best-fit plane) to use as the attach point.
    1. Hover between three neighboring unselected track points that can define a plane; a semitransparent triangle appears between the points, and a red target appears showing the orientation of the plane in 3D space.
    2. Draw a marquee-selection box around multiple track points to select them.
  2. Right-click above the selection or target, and then choose the type of content to create. The following types can be created:
    • Text
    • Solid
    • Null layer for the center of the target
    • Text, solid, or null layer for each selected point
    • “Shadow catcher” layer (a solid that accepts shadows only) for the created content by using the Create Shadow Catcher command in the context menu. A shadow catcher adds a light to the scene, if none exists.
Moving the target to attach content to a different location 

To move the target so that you can attach content to a different location, do the following:

  1. Hover over the center of the target; the “move” cursor appears for repositioning it.
  2. Drag the center of the target to the desired location.

Once at the intended location, you can attach content by using the commands in the context menu.

Note: If the size of the targets is too small or too large to see, you can resize them to help visualize the planes. The target size also controls the default size of text and solid layers created using the context menu commands.

Effect controls for the 3D camera tracker 

The effect has the following controls and settings:
Analyze/Cancel
Starts or stops the background analysis of the footage. During analysis, status appears as a banner on the footage and next to the Cancel button.
Shot Type
Specifies whether the footage was captured with a fixed horizontal angle of view, variable zoom, or a specific horizontal angle of view. Changing this setting requires a resolve.
Horizontal Angle of View
Specifies the horizontal angle of view the solver uses. Enabled only when Shot Type is set to Specify Angle of View.
Show Track Points
Identifies detected features as 3D points with perspective hinting (3D Solved) or 2D points captured by the feature track (2D Source).
Render Track Points
Controls if the track points are rendered as part of the effect.
Note: When the effect is selected, track points are always shown, even if Render Track Points is not selected. When enabled, the points are rendered into the image, allowing them to be seen during preview.
Track Point Size
Changes the displayed size of the track points.
Create Camera
Creates the 3D camera. A camera is automatically added when you create a text, solid, or null layer from the context menu.
Advanced controls

Advanced controls for the 3D camera tracker effect:

  • Solve Method: Provides hints about the scene to help in solving the camera. Solve the camera by trying the following:
    • Auto Detect: Automatically detects the scene type.
    • Typical: Specifies the scene as one that is neither purely rotational nor mostly flat.
    • Mostly Flat Scene: Specifies the scene as mostly flat, or planar.
    • Tripod Pan: Specifies the scene as purely rotational.
  • Method Used: When Solve Method is set to Auto Detect, this displays the actual solve method used.
  • Average Error: Displays the average distance (in pixels) between the original 2D source points and a reprojection of the 3D solved points onto the 2D plane of the source footage. If a track/solve was perfect, this error would be 0 and there would be no visible difference if you toggled between 2D Source and 3D Solved track points. You can use this value to tell if deleting points, changing the solve method, or making other changes is lowering this value, and thus improving the track.
  • Detailed Analysis: When checked, makes the next analysis phase do extra work to find elements to track. The resulting data (stored in the project as part of the effect) is much larger, and analysis is slower with this option enabled.
  • Auto-delete Points Across Time: With the Auto-delete Track Points Across Time option, when you delete track points in the Composition panel, corresponding track points (i.e., track points on the same feature/object) are deleted at other times on the layer. You don’t need to delete the track points frame by frame to improve the quality of the track. For example, you can delete track points on a person running through the scene, whose motion should not be considered for the determination of how the camera was moving in the shot.
  • Hide Warning Banner: Use when you don’t want to reanalyze footage even though there is a warning banner indicating that it should be reanalyzed.
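To make the Average Error control above concrete, here is a sketch of how such a value can be computed: reproject each solved 3D point through a simple pinhole camera and measure the pixel distance to its 2D source point. The camera model and numbers here are illustrative assumptions, not Adobe's actual solver.

```python
# Sketch of an average reprojection error: project each solved 3D
# point back to 2D and measure the pixel distance to its original
# 2D source point. Plain pinhole camera with focal length in
# pixels - an illustration, not Adobe's implementation.

import math

def project(point3d, focal_px, centre):
    # Pinhole projection: camera-space (x, y, z) -> pixel coords.
    x, y, z = point3d
    cx, cy = centre
    return (cx + focal_px * x / z, cy + focal_px * y / z)

def average_error(source_2d, solved_3d, focal_px, centre):
    total = 0.0
    for (u, v), p in zip(source_2d, solved_3d):
        pu, pv = project(p, focal_px, centre)
        total += math.hypot(pu - u, pv - v)
    return total / len(source_2d)

# A perfect solve reprojects exactly onto the source points:
src = [(960.0, 540.0), (1160.0, 540.0)]
pts = [(0.0, 0.0, 10.0), (2.0, 0.0, 10.0)]
print(average_error(src, pts, 1000.0, (960.0, 540.0)))  # 0.0
```

This is why a lower Average Error after deleting bad points or changing the solve method means the track has genuinely improved.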

Ground plane and origin in 3D Camera Tracker effect 

You can define a ground plane (reference plane) and origin, for example, the (0,0,0) point of the coordinate system within the 3D Camera Tracker effect.

  1. Analyze the scene using the 3D Camera Tracker effect
  2. Select a set of tracking points. This action causes the bullseye target to appear, showing the plane defined by the selected tracking points.
  3. Optionally, drag the target by its center to reposition it along the plane, placing the center where you want the origin to be.
  4. Right-click (Windows) or Control-click (Mac OS) the target and choose Set Ground Plane And Origin.

This action does not have any visible result, but the reference plane and origin of the coordinate system are saved for this scene. Any items that you create from within this instance of the 3D Camera Tracker effect are created using this plane and origin.

Creating the track. (Vimeo, 2017)
Camera Track Plug in- Exporting the track to Maya
One of the biggest challenges, seemingly, would be exporting the camera track to Maya (or “the 3D package” as I keep hearing). After searching the internet, I found lots of guidance pointing towards a plug-in created by Ryan Gilmore.
The above tutorial goes through the steps of how to create a track from footage (as described above).
Vimeo. (2017). After Effects New Features – Set Origin on 3D Track. [online] Available at: [Accessed 3 Jan. 2017].

Colour Grading my Composite


Above is my composite before the addition of any colour grading.

As seen in a previous post, I started by creating a vignette from a black solid- lassoing a mask to focus on the character, with the black applied to the edges of the image. The tutorial I was watching didn’t really go into detail on how to create this, so I did further snooping on Adobe’s website.


Below is my own AE vignette creation- I played around with the feathering and opacity of the mask to create the desired effect.


Once I was happy with this, I then went on to adjust the saturation using an adjustment layer set to hue/saturation.

After this I wanted to alter the colouring of the image more- through the tint effect in the adjustment layers.


This was my original grading but I found it too dark- I moved in to work on the opacity of each layer.


This is my final colour grade- I am leaving it for now so that I can attempt to recreate this in NUKE.

The creation of Click

So, I created my own little creature- however, unlike Gerald Dunleavy, I wanted to do something different and create a character from a household object, already having one in mind. I took inspiration from different sources to come up with Click (my character).

One of my biggest inspirations was Digi from Jam Media’s pilots for Snoozeville. I really liked his long dangly limbs- moving in a wavy fashion.


Digi from Snoozeville (the alarm clock). (JAM Media, 2017).

I also looked at Beemo from Adventure Time for the more simplified face- having it played on a screen like both characters above. I went with a TV, wanting to give it that clay look shown in Snoozeville and keep the shapes primary too- like a kid’s toy.


Beemo from Adventure Time. (Cartoon Network, 2017).

Below is my little character (bum-da-da-dum!). I am actually quite happy with how he came out and hope I can match him into my scene OK!


References (2017). JAM Media | Animation with Humour and Heart. [online] Available at: [Accessed 1 Jan. 2017].

Cartoon Network. (2017). Cartoon Network | Free Online Games, Downloads, Competitions & Videos for Kids. [online] Available at: [Accessed 1 Jan. 2017].



Colour Grading After Effects

So I had a bit of a problem with my AE file. Trying to composite the layers together was giving a totally different colour than I wanted. Alec suggested I look at colour grading the overall composition, or each of the passes subtly. I wanted to have a further look into this, given I had already colour matched the saturation and black/white values.

So, what is colour grading?

Color grading is the process of altering and enhancing the color of a motion picture, video image, or still image either electronically, photo-chemically or digitally. Color grading encompasses both color correction and the generation of artistic color effects. Whether for theatrical film, video distribution, or print, color grading is generally now performed digitally in a color suite. The earlier photo-chemical film process, known as color timing, was performed at a photographic laboratory. (, 2016).

Below is a video explaining the different uses of colour grading in film. It explains the use of colour palettes, how they are used in film and why- the likes of Transformers has a consistent palette throughout the film, whereas Black Hawk Down’s is always changing and evolving.

Colour grading and film. (YouTube, 2016).

How to Colour Correct/ Grade in AE. (YouTube, 2016).

  • Levels- keep on stock RGB (luma). Bring the darks in further, as the image is flat, and do the same with the highs (making sure not to blow out the lights or crush the blacks). Move the gamma to the middle to adjust. Don’t use the sliders below- they control overall transparency.
  • Colour wheel- the biggest thing is primary and complementary colours in colour correcting- blue to yellow then red, and orange to green and violet. A common complementary correction is blue in the shadows and yellow in the highlights. This is done in curves, which is a bit more visual than using levels. The further the curve is pulled down, the darker the image gets; the higher, the lighter. To add blue to the shadows, change the curve from RGB to blue and move the lower part of the curve up a notch. IMPORTANT- move the middle back to the centre so it doesn’t change the highlights too. For the top of the curve, move it down subtly.
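The levels and curves moves in the notes above can be sketched numerically- a levels-style gamma adjustment, and a curves-style lift applied only to the shadows of one channel. This is illustrative maths on 0-1 values, not After Effects' implementation:

```python
# Numerical sketch of the two grading moves described above,
# on 0-1 channel values. Illustrative maths, not AE's code.

def levels(value, black=0.0, white=1.0, gamma=1.0):
    # Remap the input range, then apply gamma (gamma > 1 brightens mids).
    v = (value - black) / (white - black)
    v = min(max(v, 0.0), 1.0)
    return v ** (1.0 / gamma)

def lift_shadows(value, amount):
    # Raise the bottom of the curve; the effect fades out by the
    # mids, so highlights stay put - the "move the middle back" tip.
    return value + amount * max(0.0, 1.0 - 2.0 * value)

# Push blue into the shadows of a dark pixel:
r, g, b = 0.10, 0.10, 0.10
b = lift_shadows(b, 0.08)
print(round(b, 3))  # 0.164

# Highlights are unaffected by the shadow lift:
print(lift_shadows(0.9, 0.08))  # 0.9
```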


Colour Grade any Footage. (YouTube, 2016).

  • Tint- use of tint to give a different feel to the image.

I wanted to create a more stylistic lighting look in my composite- to push the colours in the character compared to those of the black and white background photo.

Creation of a vignette- a black solid, creating a mask using the lasso tool and setting the type to subtract so it only affects the outer edges. This allows more pop in the character/man playing the guitar.
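The vignette amounts to a mask whose darkening falls off towards the edges, with the feather controlling the softness. A minimal sketch of that falloff- my own illustration, not AE's mask maths:

```python
# Sketch of a vignette weight: 1.0 (untouched) in the centre,
# falling to 0.0 (fully dark) at the corners, with a feather
# controlling the width of the soft band. Illustrative only.

def vignette_weight(x, y, width, height, feather=0.4):
    # Normalised distance from frame centre (0 centre, 1 corner).
    nx = (x - width / 2) / (width / 2)
    ny = (y - height / 2) / (height / 2)
    d = (nx * nx + ny * ny) ** 0.5 / (2 ** 0.5)
    # Fully bright inside, fading to dark over the feathered band.
    t = (d - (1.0 - feather)) / feather
    return 1.0 - min(max(t, 0.0), 1.0)

# Centre of a 1920x1080 frame is untouched, corners fully darkened:
print(vignette_weight(960, 540, 1920, 1080))  # 1.0
print(vignette_weight(0, 0, 1920, 1080))      # 0.0
```

Increasing the feather widens the soft band, which is exactly what playing with the mask feathering does in AE.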

Adjustment layer 1 (top)- the first approach- saturation to -19 and an opacity of 52%. The reds and greens calm down more.

Adjustment layer 2- using a look layer with 52% opacity- the default setting makes it look more saturated.
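The first adjustment layer can be approximated numerically: a desaturation towards luma, blended back at the layer's opacity. The maths here is the usual lerp-towards-luma desaturation with Rec.709 weights- an assumption for illustration, not AE's exact hue/saturation code:

```python
# Sketch of adjustment layer 1: desaturate by 19 (of 100), then
# blend the result back at 52% layer opacity. Illustrative maths,
# not After Effects' hue/saturation implementation.

def desaturate(r, g, b, amount):
    # amount in 0-1; 0 = untouched, 1 = fully grey (Rec.709 luma).
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    mix = lambda c: c + (luma - c) * amount
    return (mix(r), mix(g), mix(b))

def layer_opacity(original, adjusted, opacity):
    # Blend the adjusted result back over the original.
    return tuple(o + (a - o) * opacity for o, a in zip(original, adjusted))

src = (0.8, 0.2, 0.2)                # a strong red
adj = desaturate(*src, amount=0.19)  # saturation -19 (of 100)
out = layer_opacity(src, adj, 0.52)  # 52% layer opacity
print(tuple(round(c, 3) for c in out))
```

The red channel drops and the others rise slightly, which is the "reds calming down" effect described above.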


References (2016). Color grading. [online] Available at: [Accessed 24 Dec. 2016].

Ward, C. (2017). 8 Methods for Color Grading in After Effects. [online] RocketStock. Available at: [Accessed 2 Jan. 2017].


Alec to the Rescue- Passes in Arnold

Alec sent me a few articles when looking at my compositing- including the one on AOVs below.

Basically, it talked about how different tutorials show different ways of recreating the beauty from the separate render passes (diffuse, specular etc.)- but not all of these ways create a 100% replica of the beauty pass.

To create a mathematically accurate rebuild of the beauty you must use only plus operations- if you’re using Screen (which clamps) or Multiply (which can create dark edges) you’re doing things wrong.

Below is a compiled list of the 100% rebuild of the beauty from each of the main render engines- believed to be 100% pixel/colour accurate.


SpecularDirect – SpecularDirectShadow + SpecularIndirect + SpecularEnvironment + Ambient + DiffuseDirect + Translucence – DiffuseDirectShadow + DiffuseIndirect + DiffuseEnvironment + Backscattering + Subsurface + Rim + Refraction + Incandescence = Beauty


Arnold

direct_diffuse + indirect_diffuse + direct_specular + indirect_specular + refraction + deep_scatter + mid_scatter + shallow_scatter + primary_specular + secondary_specular = Beauty


V-Ray

GI + Lighting + Specular + Reflection + Refraction + SelfIllum + SSS = Beauty

Mental Ray

Diffuse + Indirect + Specular + Reflection + Refraction + Incandescent + Scatter = Beauty
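The plus-only rule can be checked numerically: summing the passes rebuilds the beauty without clamping, whereas a screen combine clamps towards 1.0 and drifts away from the true result. The AOV names follow the Arnold-style list above; the pixel values are made up:

```python
# The plus-only rebuild, sketched numerically for one pixel.
# AOV names follow the Arnold-style list above; the sample
# values are invented for illustration.

aovs = {
    "direct_diffuse":    0.30,
    "indirect_diffuse":  0.10,
    "direct_specular":   0.45,
    "indirect_specular": 0.20,
    "refraction":        0.05,
}

# Plus operations: a straight sum of every pass.
beauty = sum(aovs.values())
print(round(beauty, 3))  # 1.1 - additive rebuild can exceed 1.0

# Screen combine for comparison: it clamps towards 1.0,
# so it cannot reproduce the beauty exactly.
screened = 1.0
for v in aovs.values():
    screened *= (1.0 - v)
screened = 1.0 - screened
print(round(screened, 3))  # 0.737 - not the true beauty
```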

What about the Shadow and Ambient Occlusion Passes?

Additional considerations were addressed when mentioning the shadow pass- what if the shadow needed to be changed? As I have mentioned previously, you should only be recolouring the shadow, nothing else, if the 3D was done correctly. If colour adjustments need to be made, simply make the adjustment and plus it back in.

Regarding the Ambient Occlusion pass, this is used as a cheat for Global Illumination, an expensive pass to render. Nowadays, hardware and renderers have become much faster, and doing full GI is no longer an issue so long as you optimise your render settings correctly.

Camera Projection- Basics

Another method used in compositing I wanted to try was camera projection- this was shown earlier in a showreel we looked at, which broke down how the lighthouse/robot short was created.

Camera projection is defined as any method of mapping three-dimensional points onto a two-dimensional plane. I found a gorgeous example of this while looking over showreels in Greg’s class- a breakdown of different buildings with snow composited into them.
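The projection maths is the same pinhole model used in matchmoving, run in reverse to look up texture colours: a 3D point's texture coordinate is simply where it lands in the photographed frame. A minimal sketch, with the focal length and frame size as assumed example values:

```python
# Camera projection in a nutshell: a 3D point on the geometry gets
# its texture coordinate by projecting it through the camera that
# "shot" the photo. Minimal pinhole sketch, not Maya's projection node.

def project_uv(point3d, focal_px, width, height):
    # Project a camera-space point to normalised UVs (0-1 across frame).
    x, y, z = point3d
    u = 0.5 + focal_px * x / (z * width)
    v = 0.5 + focal_px * y / (z * height)
    return (u, v)

# A point straight down the lens axis lands in the frame centre:
print(project_uv((0.0, 0.0, 5.0), 1000.0, 1920, 1080))  # (0.5, 0.5)
```

Because the lookup only makes sense from the original camera position, geometry that roughly matches the photo (the cube and ground plane below) is what sells the illusion when the camera moves.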

The clip that inspired me to look further into Camera Projection. (YouTube, 2016).

Basic Camera Projection Skills

Basic Camera Projection Tutorial. (YouTube, 2016).

The tutorial above begins in Photoshop- separating the box from the background. The gap is then cleaned/treated using the clone tool.

In Maya, the orientation grid is matched up to the angle of the box. A cube is then created and sized to match the box in the photo. A plane is then created to act as the ground- again matching the image.

For both the ground and the box, the previously altered images from Photoshop are added as textures by creating a new texture (File > File as projection).

The shapes can be altered, and so can the texture attributes, if they do not match up correctly.



YouTube. (2016). Maya Camera Projection Tutorial, Camera Mapping: Must Watch. [online] Available at: [Accessed 19 Dec. 2016].