This semester has been a huge learning curve for me, giving me a big confidence boost towards exploring different software and working outside of my comfort zone. I have realised that I really do enjoy the 3D elements I am working on and hope to get better, especially with ZBrush. I am immensely enjoying character design and texturing, and love creating characters to go alongside the briefs given.
It has taught me a lot about interacting with clients and how to work with those whose visions or ideas keep changing. I enjoyed working for MashMob and will definitely consider them for my own placement in the upcoming year.
Here is to placement year- hoping I find one. If not, I have a few tricks up my sleeve.
Below is the final animation video we are bringing to MashMob. There are a few things we need to fix- notably the animations on both the chicken and the octopus, as they were keyframed in the wrong position. The chicken itself was pretty easy to fix, as once Robert added a further control we could move the character into position. There was slight shifting in the feet- something we will need to look at fixing at a later date.
The octopus, on the other hand, was a different story. Despite a locked camera and environment, the character was animated through the shelves. This resulted in us having to render the octopus on a separate pass; however, the lighting came into play here, making it look off. This is definitely something that will need to be fixed, as it currently looks terrible.
I posted this image online and received some feedback on Mr Robot. Michael and Eric (my two unofficial after-school mentors) gave some suggestions on his mouth to fix the stretching of the UVs.
Michael suggested adding two more subdivisions to the model in Substance Painter before painting, stating: “It’s a pretty good way of compensating for subdivisions affecting your UVs at render time.”
Michael also explained that the artefact comes from Catmull-Clark subdivision, as these subdivisions only preserve border UVs. He commented that it was more than likely a stretched UV seam.
I wanted to know what Catmull-Clark subdivision actually is, so I had a look online. It turns out I already knew what it was- just not the actual name.
Catmull-Clark subdivision- a technique used in computer graphics to create smooth surfaces by subdivision surface modelling (Wiki, 2017).
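Out of curiosity I put together a toy Python sketch of one Catmull-Clark step (entirely my own, not from any tutorial), simplified to a single quad. With a lone quad every edge is a border edge, so only the boundary rules apply:

```python
# Toy sketch of one Catmull-Clark step on a single boundary quad.
# Simplified boundary rules:
#   face point = average of the face's corner vertices
#   edge point = midpoint of the edge (boundary rule)
#   new corner = (previous + 6 * corner + next) / 8 (boundary rule)

def average(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def catmull_clark_quad(quad):
    face_point = average(quad)
    edge_points = [average([quad[i], quad[(i + 1) % 4]]) for i in range(4)]
    new_corners = [
        tuple((quad[i - 1][k] + 6 * quad[i][k] + quad[(i + 1) % 4][k]) / 8
              for k in range(2))
        for i in range(4)
    ]
    return face_point, edge_points, new_corners

# Unit square: the corners get pulled inwards towards the face point.
fp, eps, corners = catmull_clark_quad([(0, 0), (1, 0), (1, 1), (0, 1)])
print(fp)          # (0.5, 0.5)
print(corners[0])  # (0.125, 0.125)
```

That inward pull is the smoothing that rounds a cube off into a blob after a few levels- and it is the same relaxation that can drag interior UVs around at render time, which is the artefact Michael was describing.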
I listened to the advice, upping my subdivisions and thereby removing the artefact.
Ta da. Now onto the rendering.
Looking into how design is used with robotics, I found the talk below, which was super interesting, from one of the hosts of one of my favourite shows, MythBusters.
Design- Robots with Grant Imahara (YouTube, 2016).
- Using design to make the most of the time you have- to be more efficient and get more done.
- Planning in design. FIRST robots are, by nature, complicated.
- Pay a little more attention to the early stages of the design process: before building, spend time making a study model, whether in CAD or popsicle sticks. Using a model to test saves materials.
- Team 111- WildStang and their lifting arm.
- Define- understanding the problem. The requirements and constraints are the problem at hand. Hand sketches were used to identify critical aspects (in this case the dimensions of the robot).
- Ideate- brainstorming. As a result, they based the robot arm on a human arm: linkages were used as the bones and power outputs as the muscles.
- Create- building. CAD was used to build a virtual model, which could then be split into different parts; these parts were divided among the sub-teams. This allows problems to be identified and fixed in the CAD model before being applied to the real-world build.
- Solve- manufacturing. The CAD models provided blueprints for the actual build.
- Review- make your model, work out the ideas and then make the parts.
- More realistic facial expressions, with lower battery use.
- The goal is to create machines with empathy.
- Facial expression technology- where you are looking, head orientation.
- Using the character engine, these machines recognise the expressions being made and then co-ordinate a response.
- This involves two things- perception of people and a more intuitive interface.
Below are some of my test renders using Substance Painter's inbuilt render engine, Iray.
One of the things I wanted to look into in Substance Painter was adding decals to my robot- especially on the Walkman device on his neck.
I found it actually not to be that hard a process- starting with importing files into the scene to be usable as alphas. I then turned on the projection tool, dragging the alpha I needed into the material tab. I resized the alpha on top of the UV menu and then painted it in- creating the design I needed.
Decal Painting. (YouTube, 2017).
Initially, I wanted to test out how my materials would look in Substance Painter's built-in render engine, Iray.
I actually found it really easy to use, and not so render 'heavy', considering my laptop could cope with it. I also found it a lot easier to handle than Arnold- not as much noise present when rendering, and the noise itself was fairly easy to remove.
This is one of the biggest stumbling blocks for me- the conversion of Substance Painter textures to Maya. Trying to convert the files using Arnold's standard AI materials is a lengthy process. Doing further research, I found an easier technique by Nick Deboar.
Exporting the textures-
Deboar used the standard Arnold export setup but altered it. First he removed the F0 output and created a metallic one, by creating a new grey output and naming it with the same naming convention as the others, with 'metal' added on the end. I copied the type of file exported- in this case EXR.
N.B. when exporting EXRs, most software makes them linear; however, Substance Painter does not, so they are still sRGB and you have to treat them like a JPEG or TIFF, meaning a colour space conversion is required.
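The conversion itself is just the standard sRGB transfer function. As a quick Python sketch (my own illustration, not part of Deboar's setup) of what the colour management does per channel:

```python
# Sketch of the per-channel sRGB -> linear conversion that colour
# management applies when a texture is tagged as sRGB.
# (This is the standard sRGB transfer function.)

def srgb_to_linear(c):
    """Convert one channel value in [0, 1] from sRGB to linear."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.0))            # 0.0
print(round(srgb_to_linear(0.5), 4))  # 0.214
```

Mid grey comes out much darker in linear- which is why an EXR that is secretly still sRGB looks washed out if you skip the conversion.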
Assigning the Textures
Diffuse- this is kept as sRGB and simply imported.
Specular- this is imported and kept as sRGB.
Roughness- import and change the colour space to Raw, as we only want to do gamma correction on our colour images. Also, in the colour balance menu, ensure 'alpha is luminance' is checked. Why? In the node editor the connection is an out alpha, and if the file has no alpha channel it will use the luminance of the RGB instead.
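As a sketch of what 'alpha is luminance' is computing (my own illustration- I am assuming the common Rec. 709 luminance weights here):

```python
# Sketch of "alpha is luminance": when a texture has no alpha channel,
# an alpha value is derived from the RGB instead.
# Assuming Rec. 709 luminance weights (a common choice).

def luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A uniform mid-grey roughness pixel comes through unchanged.
print(round(luminance(0.5, 0.5, 0.5), 6))  # 0.5
```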
N.B. set the distribution in the advanced tab to GGX. This is because Substance uses the Disney BRDF, which is a GGX specular model.
I.O.R.- insert a setRange node, using it to remap the metal map onto some IOR values. In the output, select the out value as X and the input as specular1IOR. Then alter the values as below.
Then, in the node editor, connect a texture node with the metal file loaded (via its colour value X) to the value X of the setRange node.
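The setRange node is just a linear remap. Here is a quick sketch of the maths (the actual min/max values are in the screenshot from Deboar's tutorial- the 1.5 to 3.0 dielectric-to-metal range below is a placeholder assumption of mine, not his numbers):

```python
# Sketch of the linear remap a Maya setRange node performs:
# a metal-map value in [old_min, old_max] is mapped to [new_min, new_max].
# The 1.5 -> 3.0 IOR range below is a placeholder assumption.

def set_range(value, new_min, new_max, old_min=0.0, old_max=1.0):
    t = (value - old_min) / (old_max - old_min)
    return new_min + t * (new_max - new_min)

print(set_range(0.0, 1.5, 3.0))  # 1.5  -> dielectric IOR for black (non-metal)
print(set_range(1.0, 1.5, 3.0))  # 3.0  -> higher IOR for white (metal)
print(set_range(0.5, 1.5, 3.0))  # 2.25 -> blend for in-between values
```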
Normal maps- no different here than normal. Insert the normal map, tick the flip R and flip G channels, change 'use as' to tangent space normals, and set the normal map's colour space to Raw.
Height map- as the bump map slot is already filled, another has to be added in the node editor.
This is done by inserting an aiBump2d node between the alSurface shader and the shading group: connect the shader's out colour to the shader input on the aiBump2d node, and then connect the aiBump2d node's out colour to the surface shader input on the shading group node.
I applied all of this and successfully managed to achieve a close match to the original renders.