Posted this image online and received some feedback on Mr Robot. Michael and Eric (my two unofficial after-school mentors) gave some suggestions on his mouth to fix the stretched UVs.
Michael suggested adding two more subdivisions to the model before painting in Substance Painter, stating: “It’s a pretty good way of compensating for subdivisions affecting your UVs at render time.”
Michael also explained that the artefact comes from Catmull-Clark subdivision, which only preserves border UVs. He commented that it was more than likely a stretched UV seam.
I wanted to know what Catmull-Clark subdivision is, so I had a look online. It turns out I already knew what it was, just not the actual name.
Catmull-Clark subdivision: a technique used in computer graphics to create smooth surfaces by subdivision surface modelling (Wiki, 2017).
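Out of curiosity, I sketched the scheme to check I understood it. This is a minimal toy version of one Catmull-Clark step in Python, assuming a closed (watertight) quad mesh; the border/UV rules that caused my seam artefact are exactly the part it leaves out.

```python
# One Catmull-Clark step on a closed quad mesh (every edge borders exactly
# two faces). Boundary/UV handling is deliberately omitted.

def average(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def catmull_clark(verts, faces):
    # 1. Face points: the centroid of each face.
    face_pts = [average([verts[i] for i in f]) for f in faces]

    # 2. Adjacency: faces per edge, plus faces and edges per vertex.
    edge_faces, vert_faces, vert_edges = {}, {}, {}
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            e = frozenset((a, b))
            edge_faces.setdefault(e, []).append(fi)
            for v in (a, b):
                vert_faces.setdefault(v, set()).add(fi)
                vert_edges.setdefault(v, set()).add(e)

    # 3. Edge points: average of the two endpoints and the two face points.
    edge_pts = {e: average([verts[i] for i in e] +
                           [face_pts[fi] for fi in fs])
                for e, fs in edge_faces.items()}

    # 4. Move each original vertex to (F + 2R + (n - 3)P) / n, where F is the
    #    average of adjacent face points, R the average of incident edge
    #    midpoints, and n the valence.
    moved = {}
    for v, fs in vert_faces.items():
        n = len(fs)
        F = average([face_pts[fi] for fi in fs])
        R = average([average([verts[i] for i in e]) for e in vert_edges[v]])
        moved[v] = tuple((F[c] + 2 * R[c] + (n - 3) * verts[v][c]) / n
                         for c in range(3))

    # 5. Emit one new quad per original face corner, re-indexing as we go.
    out_verts, index, out_faces = [], {}, []
    def idx(key, pos):
        if key not in index:
            index[key] = len(out_verts)
            out_verts.append(pos)
        return index[key]
    for fi, f in enumerate(faces):
        for k, v in enumerate(f):
            prev_e = frozenset((f[k - 1], v))
            next_e = frozenset((v, f[(k + 1) % len(f)]))
            out_faces.append((idx(('v', v), moved[v]),
                              idx(('e', next_e), edge_pts[next_e]),
                              idx(('f', fi), face_pts[fi]),
                              idx(('e', prev_e), edge_pts[prev_e])))
    return out_verts, out_faces

# One step on a unit cube: 8 verts / 6 quads become 26 verts / 24 quads,
# and the sharp corners get pulled in toward the centre.
s = 0.5
cube_verts = [(x, y, z) for x in (-s, s) for y in (-s, s) for z in (-s, s)]
cube_faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
              (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
new_verts, new_faces = catmull_clark(cube_verts, cube_faces)
```

Because the new surface interpolates between corners, face centroids and edge midpoints, each step smooths the mesh, which is why adding subdivisions before painting also redistributes the UVs.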
I took the advice, upping my subdivisions and thereby removing the artefact.
Ta da. Now onto the rendering.
Looking into how to approach design with robotics, I found the talk below, which was super interesting, from one of my favourite shows, MythBusters.
Design- Robots with Grant Imahara (YouTube, 2016).
- Using design to make the most of the time you have: to be more efficient and get more done.
- Planning in design. FIRST robots are, by nature, complicated.
- Pay a little more attention to the early stages of the design process: before building, spend time making a study model, whether in CAD or from popsicle sticks. Using a model to test saves materials.
– Team 111, Wildstang, and their lifting arm
- Define: understanding the problem. Requirements and constraints frame the problem at hand. Hand sketches were used to identify critical aspects (in this case, the dimensions of the robot).
- Ideate: brainstorming. As a result, they based the robot arm on a human arm: linkages were used as bones and powered actuators as muscles.
- Create: building. Using CAD to build a virtual model allows it to be split into different parts, which were divided among the sub-teams. It also allows problems to be identified and fixed in the CAD model before they reach the real-world build.
- Solve: manufacturing. The CAD models provided blueprints for the actual build.
Review: make your model, work out the ideas, and then make the parts.
- More realistic facial expressions, with lower battery use.
- The goal: to create machines with empathy.
- Facial expression technology: where you are looking, how your head is orientated.
- Using the character engine, these machines recognise the expressions being made and then coordinate a response.
- This involves two things: perception of people and a more intuitive interface.
Below are some of my test renders using Substance Painter's built-in render engine, Iray.
One of the things I wanted to look into in Substance Painter was adding decals to my robot, especially on the Walkman device on his neck.
I found it actually not that hard a process: it starts with importing files into the scene to be usable as alphas. I then turned on the projection tool and dragged the alpha I needed into the material tab. I resized the alpha on top of the UV menu and then painted it in, creating the design I needed.
Decal Painting. (YouTube, 2017).
Initially, I tested how my materials would look in Substance Painter's built-in render engine, Iray. I found it really easy to use, and not too render-heavy, considering my laptop could cope with it. I also found it a lot easier to handle than Arnold: there was less noise present when rendering, and the noise itself was fairly easy to remove.
This was one of the biggest stumbling blocks for me: converting Substance Painter textures for Maya. Converting the files for Arnold's standard AI materials is a lengthy process. Doing further research, I found an easier technique by Nick Deboar.
Exporting the textures
Deboar used the Arnold export setup but altered it. First he removed the f0 output and created a metallic one, by creating a new greyscale output and naming it with the same naming convention as the others, with 'metal' added on the end. I copied the type of file exported, in this case EXR.
N.B. When exporting EXRs, most software makes them linear; however, Substance Painter does not, so they are still sRGB and you have to treat them like a JPEG or TIFF, meaning a colour space conversion is required.
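To be sure I understood the conversion, here is a rough sketch of it, assuming the values were written with the standard piecewise sRGB transfer function (not a plain 2.2 gamma):

```python
# Piecewise sRGB <-> linear conversion (per channel, values in [0, 1]).

def srgb_to_linear(s: float) -> float:
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l: float) -> float:
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# sRGB mid-grey 0.5 is only about 0.214 in linear light, which is why
# skipping this step makes the maps render noticeably wrong.
mid = srgb_to_linear(0.5)
```

This is just the maths; in practice the colour space conversion happens via the file node's colour space setting, not by hand.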
Assigning the Textures
Diffuse: kept as sRGB and simply imported.
Specular: imported and also kept as sRGB.
Roughness: import and change to Raw, as we only want gamma correction on our colour images. Also, in the colour balance menu, make sure 'alpha is luminance' is checked. Why? In the node editor we connect the out alpha, and if the file has no alpha channel, it will use the luminance of the RGB instead.
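Roughly, the checkbox falls back to something like this when the file has no alpha channel (the Rec. 709 luma weights here are my assumption, not something I confirmed from Maya's docs):

```python
# Derive a single "alpha" value from RGB using Rec. 709 luma weights
# (assumed; green dominates because our eyes are most sensitive to it).

def luminance(r: float, g: float, b: float) -> float:
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```

So a greyscale roughness map still produces a sensible out alpha even though the file itself has no alpha channel.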
N.B. Set the distribution (under Advanced) to GGX. This is because Substance uses the Disney BRDF, which has a GGX specular model.
I.O.R.: insert a setRange node, to remap the metal map onto IOR values. In the output, select the out value X, and as the input, specular1IOR. Then alter the values as below.
Then, in the node editor, connect a texture node with the metal file loaded (using colour value X) to the value X of the setRange node.
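Under the hood the setRange node is just a linear remap from one range onto another. A sketch, with placeholder IOR endpoints rather than Deboar's actual values:

```python
# Linear remap of [old_min, old_max] onto [new_min, new_max], i.e. what a
# setRange node computes (whether Maya clamps inputs outside the old range
# is not something I have verified).

def set_range(value, old_min, old_max, new_min, new_max):
    t = (value - old_min) / (old_max - old_min)
    return new_min + t * (new_max - new_min)

# Placeholder endpoints: metal map 0.0 (dielectric) -> IOR 1.5,
# metal map 1.0 (metal) -> IOR 25.0.
dielectric_ior = set_range(0.0, 0.0, 1.0, 1.5, 25.0)
metal_ior = set_range(1.0, 0.0, 1.0, 1.5, 25.0)
```

The metal map therefore slides the IOR between a dielectric-like value and a metal-like value per pixel.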
Normal maps: nothing different here. Insert the normal map, tick the Flip R and Flip G channels, change 'Use As' to Tangent Space Normals, and set the normal map to Raw.
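For my own reference, the flip checkboxes just invert the stored channel, which negates that axis of the decoded normal (this is the same trick used to convert between DirectX- and OpenGL-style green channels):

```python
# Normal maps store each axis of a [-1, 1] vector as a [0, 1] texture value.
# Flipping a channel inverts the stored value, negating that axis.

def flip_channel(c: float) -> float:
    return 1.0 - c

def decode(c: float) -> float:
    # [0, 1] texture value -> [-1, 1] normal component
    return 2.0 * c - 1.0
```

So flipping a channel and then decoding gives exactly the negated axis, with the neutral value 0.5 left unchanged.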
Height map: as the bump map slot is already filled, another bump node has to be added in the node editor.
This is done by inserting an aiBump2d node between the alSurface shader and the shading group: connect the shader's out colour to the shader input on the aiBump2d node, then connect the aiBump2d node's out colour to the surface shader input on the shading group.
I applied all of this and successfully achieved a close match to the original renders.
I wanted to look further into the making of Wall.E (2008) through Pixar's production notes. It is amazing to see how much was altered to fit the director's original vision.
“One of the things I remember coming out of it was the idea of a little robot left on Earth,” says Stanton. “We had no story. It was sort of this Robinson Crusoe kind of little character — like what if mankind had to leave earth and somebody forgot to turn the last robot off, and he didn’t know he could stop doing what he’s doing?”
Years later, the idea took shape — literally. “I started to just think of him doing his job every day, and compacting trash that was left on Earth,” Stanton recalls. “And it just really got me thinking about what if the most human thing left in the universe was a machine? That really was the spark. It has had a long journey.”
In the footnotes, I found it interesting that the team studied giant trash compactors and other machinery, including robots, up close and in person. They also watched a huge variety of classic films for insight into cinematic expression. Pixar follow the line "truth in materials", and the animators applied this to each robot's design. Each had a function, and the designers had to try to stay within the physical limitations of each design while still creating personality.
“Our approach to the look of this film wasn’t about what the future is going to be like. It was about what the future could be — which is a lot more interesting. That’s what we wanted to impart with the design of this film. In designing the look of the characters and the world, we want audiences to really believe the world they’re seeing. We want the characters and the world to be real, not realistic looking, but real in terms of believability.”
I thought this was very relevant to the story of my own robot, the robot teacher/professor: set in a time where robots had become the thing everyone relied on for everything, until a revolt left many of them (A.I.s) without jobs, effectively working as slaves.
Jim Reardon, head of story for “WALL•E,” observes, “What we didn’t want to do on this film was draw human-looking robots with arms, legs, heads and eyes, and have them talk. We wanted to take objects that you normally wouldn’t associate with having humanlike characteristics and see what we could get out of them through design and animation.”
Stanton explains, “We wanted the audience to believe they were witnessing a machine that has come to life. The more they believe it’s a machine, the more appealing the story becomes.”
Stanton notes, “In the world of animation, pantomime is the thing that animators love best. It’s their bread and butter and they’re raised on it instinctually. John Lasseter realized this when he animated and directed his first short for Pixar, ‘Luxo Jr,’ featuring two lamp characters who express themselves entirely without dialogue. The desire to give life to an inanimate object is innate in animators. For the animators on ‘WALL•E,’ it was like taking the handcuffs off and letting them run free. They were able to let the visuals tell most of the story. They also discovered that it’s a lot more difficult to achieve all the things they needed to.
“I kept trying to make the animators put limitations on themselves because I wanted the construction of the machines and how they were engineered to be evident,” he adds. “The characters seem robotic because they don’t squash and stretch. It was a real brain tease for the animators to figure out how to get the same kind of ideas communicated and timed the way it would sell from a storytelling standpoint, and yet still feel like the machine was acting within the limitations of its design and construction. It was very challenging — and completely satisfying when somebody found the right approach and solution.”
The rest of WALL•E’s design stemmed from functionality. “How does he get trash into himself and how does he compact it?” Deamer asks. Field trips were made to recycling plants to see trash compacting machines in action. “We knew he needed treads to go up and over heaps of trash,” he says. “He also needed to be able to compact cubes of trash, and have some kind of hands to gesticulate.”
The case of Wall.E is very different from Robots (2005), as the back stories are entirely different. In Robots, the characters are a separate subsection entirely, living as a whole robot society, so they are entirely human-like in motion and nature, just with robotic extras added. With Wall.E, however, it is apparent that these machines were built solely to be subservient to humans. I wanted to go with a mix of both of these plots, as if robots were built to serve humans but also to integrate with them in society, like I, Robot (2004) or Bicentennial Man (1999).
“Andrew came in one day with the inspiration for WALL•E’s eyes. He had been to a baseball game and was using a pair of binoculars. He suddenly became aware that if he tilted them slightly, you got a very different look and feeling out of them. That became one of the key design elements for the main character.”
I liked the idea of using everyday items for inspiration: it gives the character the feel that he does not need added extras, because what he is built from gives him his personality.
One of the big points of discussion in creating the character of WALL•E was whether or not he should have elbows. “Early in the film, we had designed WALL•E with elbows,” explains supervising animator Steve Hunter. “This gave him the ability to bend his arms. As animators, we were fighting for it thinking he’s got to be able to touch his face, hang off a spaceship, and have a wide range of motion. But when you really looked at it, it didn’t feel right. He’s designed to do a task, which is to pull trash into his belly. Why would he have elbows? It didn’t make any sense. So with Andrew’s help and an inspired idea by directing animator Angus MacLane, we gave him a track around his side which allowed him to position his arms differently and give him a range of motion. It helped us flesh out the character a lot more. Something like elbows may seem kind of trivial but the way we solved the problem makes you believe in WALL•E more because we didn’t take the easy way out.”
The elimination of elbows was something I found intriguing, and it is barely noticeable in the film until you go back and actually look at Wall.E. I completely agree with the design decision, as the lack of elbows works much better.
This interview was with Jay Shuster, a designer behind Pixar films including Cars (2006) and Wall.E (2008). With a background in mechanical engineering, he gave an interesting interview discussing how that engineering knowledge fits into overall character design.
John [Lasseter] was really, really into maintaining the authenticity, the honesty of the materials in the design of those characters.
He’s right there at the edge. We maintained a size but kept him cute — he couldn’t just be a gigantic earth-moving machine. We wanted to work with a certain-size package to keep his character. We did have to cheat a bit, allowing for his head, the arms, the treads to fold into his body. Everything does kind of collide inside, but we worked really hard to get to a point where the animators could run with it and make him look like he really works.
Explaining Designs in detail (Cars)
“This is a concave area,” “you need to smooth out the transition between the hood and the fender” and so on — stuff that was inborn in me growing up in Detroit, kind of knowing what a car looks like and how it was manufactured. That was a very gratifying part of actually working here, first on Cars and then on WALL•E, was finding that knowledge again and being able to use it to make these characters as convincingly real — and as honest — as possible.