FACIAL RIGGING TECHNIQUES



Hippydrome has finally added the face!! What more can I say?

Facial Rigging: Balancing Quality and Control (Image Metrics' Jay Grenier)

These are awesome lectures -- see especially Part 2, 00:46: "If your facial rig can't do this, then it sucks!"

Explores the balance between making a 'perfect' facial rig with limited control that animators cannot mess up, and an anatomically accurate facial rig with 1000 controls that can be 'broken' by an inexperienced animator.

The IM system uses a 'hybrid' blendshape/joint setup where the joints follow the blendshapes using custom constraints. How is this done without creating cycle errors? Jay suggests checking the Image Metrics forums.

Another big point is: don't neglect the upper face in your rig. The eyes (and brows, eyelids and upper cheeks) are the window to the soul, as it were, and if you spend a lot of time on the mouth area and not enough on the upper face, viewers will still be drawn to the eyes and you risk falling into the uncanny valley.

More tips from Jay:

Make sure that your face can do narrow and wide properly

Add an attribute (specific blendshapes) for brow squeeze

At least three controls for the lip (in addition to the corners). As a bonus, add child joints to thin and thicken the lips.

If you try setting up a facial rig exclusively from Maya Muscle (and a FACS system), your animators will kill you, because the rig will be so slow. Muscle rigs are 'sweet' and 'impressive', but if you need speed, no muscle. In fact, IM made some of the first muscle-based facial rigs to break the uncanny valley, but in normal production they are still not feasible... yet!

You still have to sculpt your facial blendshapes to make them look great, but the underlying structure of nodes and connections is all done automatically - huge time saver.

Note on blendshapes: don't stop where it's anatomically correct -- animators will often need more range. Push each shape until it looks wacky but doesn't break the mesh.

Note on curve-based rigs: I think they're pretty awesome, but game engines don't support them. They're kind of a middle ground between joint-based and blendshape/hybrid rigs.

'Band-aid' corrective shapes can be easily automated with multiplyDivide nodes (i.e., the kind of shapes used on Gollum, Avatar, etc. to correct blendshape-combination issues).
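For example, a combination corrective driven by the product of two shape weights could be wired up like this -- a minimal maya.cmds sketch, where 'blendShape1' and the target names 'smile', 'jawOpen' and 'smile_jawOpen_fix' are hypothetical:

import maya.cmds as cmds

# The corrective fires only when both source shapes are on:
# weight(corrective) = weight(smile) * weight(jawOpen)
md = cmds.createNode('multiplyDivide', name='smile_jawOpen_fix_md')
cmds.setAttr(md + '.operation', 1)  # 1 = multiply

cmds.connectAttr('blendShape1.smile',   md + '.input1X')
cmds.connectAttr('blendShape1.jawOpen', md + '.input2X')
cmds.connectAttr(md + '.outputX', 'blendShape1.smile_jawOpen_fix')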

Eyes: make sure you have a 'soft eye' feature -- i.e., the lids follow the eyes.
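One common way to get that behaviour (not necessarily the IM setup) is to feed a fraction of the eye joint's rotation into the lid joints. A sketch, with hypothetical joint names and follow factors:

import maya.cmds as cmds

md = cmds.createNode('multiplyDivide', name='softEye_md')
cmds.setAttr(md + '.operation', 1)  # multiply

# Upper lid inherits 30% of the eye's vertical rotation, 15% of the horizontal
cmds.connectAttr('eye_jnt.rotateX', md + '.input1X')
cmds.connectAttr('eye_jnt.rotateY', md + '.input1Y')
cmds.setAttr(md + '.input2X', 0.3)
cmds.setAttr(md + '.input2Y', 0.15)

cmds.connectAttr(md + '.outputX', 'upperLid_jnt.rotateX')
cmds.connectAttr(md + '.outputY', 'upperLid_jnt.rotateY')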

Sticky lips: to make them look really nice, we make blendshapes -- but there is a second layer of fine control if you need it.

Q. what is ideal # of joints for a facial rig?

A. 64: 62 for the face, 1 head, 1 neck. If you go above that, it can get better.

Animators hate rigs that have a bunch of joints around the eyelids (e.g., The Face Machine) -- it's too hard to get a simple blink.

There IS no 'perfect' facial rig. Take it from me, at Image Metrics!

That said -- the best setup has both custom attributes (like blendshapes) AND a fine level of control.

Sometimes you have limitations, such as game engines that can only take 70 channels.

Q. How long does it take to build a high-quality facial rig like the one in this demo?

A. 1-2 months.

And check out the FORUMS

also, the TRAINING RIGS

Faceware Webinar series - Facial Rigging Techniques with Josh Burton

Soft Face

(link from Brad Clark)

softMod Manipulator

Ryan Griffin's Great Face Rigging Tutorial

Part 1

Fast and Efficient Facial Rigging (Jeremy Ernst)

Osmus Clayton WIP

Peter 'Morphy' Renders (nice morph deformations from 'Staged' short film) (thanks to Atsushi Kojima for the link)

Interview with Liam Kemp (3d Advanced Rigging)

Fast and Efficient Facial Rigging (Jeremy Ernst, GDC 2011)

Some excerpts:

To deal with outsourcers: packaged all the autorig scripts into an executable so they wouldn't get lost/cause trouble. Used Smart Install (www.sminstall.com)

In Gears 3, we have a game mesh and a cine mesh. They are the same mesh, but are just weighted differently to allow for applying the same animation on either. This way, we can still be smart about how many weighted bones we have in-game, but when we need the extra deformation for close-up cinematics, we can load in those weights.

Recommended reading:

Metal Gear Solid 4 article (version with images here)

CG Society - Curious Case of Benjamin Button

Facial Rig is made up of four main layers:

1. Low-res poly cage to make morph targets with -- all animatable attributes are derived from morph targets made with the cage

2. Locators pinned to the surface of the cage, which move with the morph targets

3. 'Offset rig' -- controls that move with these locators and then allow an offset from the pose

4. Joints that the actual model is weighted to -- these are constrained to the offset controls

FACS-based morph targets

So, how DO you pin locators/controls to the deformed mesh so that they move with the morph targets?

Hi guys.

I'm rigging this character's face, and so far it's working alright.

I use blendshapes for main manipulations such as smile/frown and so forth for the mouth shapes.

I then have a bunch of joints that I can use to tweak the mouth shape.

The problem is though, that my controls don't follow the new shape of the face that the blendshape creates.

I can still use my joints to influence the mesh like I could before I added the blendshape, but the control is no longer sitting on top of the place where I want it to be.

http://www.doffer.dk/deep/mouth_rigging_issue.jpg

As you can tell, the joint still controls the part it did before the blendshape kicks in; I just want the control to tag along.

I have Maya 2011, so I have access to Point on Poly, but I'm not quite sure how to tackle this problem.

What's the thinking here?

(doffer)

I'm not sure how well the method that I found works BUT what I prefer to do is create your facial joints, orient them to the face as you will, and then freeze the joint's transformations. This assumes that you're using floating joints or a "broken" joint rig.

Create a group exactly where the joint exists and then use something like rivet MEL to constrain the group node to the mesh. Create a controller of some sort (I prefer small NURBS circles for the face, as opposed to curves), snap the control over your group node, and parent the control to the group. Freeze the control's transformations.

Finally, connect your translate, rotate, and scale attributes from your controls to your joints. You may be thinking that because the joints aren't on the mesh exactly that they won't operate the same but they actually do. I've tested it and on a theoretical basis, since joints work with paint weights, it doesn't matter where they are.

The end result is that you have controls that can warp the face, and the controls -- because they are parented to the group node (which is constrained to the face via rivet) -- will follow the facial deformations of the blendshape exactly. It's a bit indirect, but it avoids the cycle that occurs when you have riveted joints (which deform the face) moving with blendshapes (which also deform the face). Hope that procedure helps you (if you still need it) and anyone else who comes by this thread.

(Korinkite)
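A hedged maya.cmds sketch of that procedure -- 'lipCnr_jnt' and 'lipCnr_ctl' are hypothetical names, and the actual surface attach is left to rivet.mel, a follicle, or a pointOnPolyConstraint, whichever you prefer:

import maya.cmds as cmds

# Floating face joint, skinned into the mesh; assumed to sit under its own
# zeroed offset group so its local channels start at 0.
jnt = 'lipCnr_jnt'
pos = cmds.xform(jnt, query=True, worldSpace=True, translation=True)

# Group at the joint position; attach THIS to the mesh (rivet.mel etc.)
grp = cmds.group(empty=True, name='lipCnr_rivet_grp')
cmds.xform(grp, worldSpace=True, translation=pos)

# NURBS circle control snapped to the group, parented under it, then frozen
ctl = cmds.circle(name='lipCnr_ctl', normal=(0, 0, 1), radius=0.5)[0]
cmds.xform(ctl, worldSpace=True, translation=pos)
cmds.parent(ctl, grp)
cmds.makeIdentity(ctl, apply=True, translate=True, rotate=True, scale=True)

# Direct connections (no constraint) from the control's LOCAL channels to the
# joint. The riveted group moves the control's parent, never these local
# values, so blendshape -> rivet -> control never feeds back into the joint
# and no cycle is created.
for attr in ('translate', 'rotate', 'scale'):
    cmds.connectAttr(ctl + '.' + attr, jnt + '.' + attr)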

Facial Capture

Ability to edit facial performance capture from http://research.animationsinstitut.de

Eye Rigging

Rigging Beautiful Eyes (Marco Giordano via Rigging Dojo!)

Great tutorial. Lots more resources here!

http://www.marcogiordanotd.com/

Did you figure out how we can adapt your methodology to characters with non-spherical eyes?

@Gerardo

Regarding the non-spherical eye: you will instead have to aim the bone and make it slide on a surface.

(M Giordano)

How I Did It:

1. Create a NURBS surface and scale it to the dimensions of your eyeball

2. Create a cpos (closestPointOnSurface) node

3. Connect nurbsSrf.worldSpace[0] --> cpos.inputSurface

4. Connect AimLoc.translate --> cpos.inPosition

5. Create a helper locator called 'helperLoc' or something, and put it at the joint position.

6. Connect cpos.position --> helperLoc.translate

7. Point-constrain the joint to helperLoc

8. Enjoy!
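The same steps in maya.cmds form -- a sketch, where 'eyeSrfShape', 'AimLoc' and 'lid_jnt' stand in for the surface shape, the aim locator and the joint from the steps above:

import maya.cmds as cmds

cpos = cmds.createNode('closestPointOnSurface', name='eye_cpos')

# Steps 3-4: feed the eyeball surface and the aim locator into the node
cmds.connectAttr('eyeSrfShape.worldSpace[0]', cpos + '.inputSurface')
cmds.connectAttr('AimLoc.translate', cpos + '.inPosition')

# Steps 5-6: helper locator rides the closest point on the surface
helper = cmds.spaceLocator(name='helperLoc')[0]
cmds.connectAttr(cpos + '.position', helper + '.translate')

# Step 7: the joint follows the helper, so it slides on the eyeball surface
cmds.pointConstraint(helper, 'lid_jnt', maintainOffset=False)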

Further issue -- what if the eyelid doesn't close completely?

I made SDKs on the point constraint offset (since for a non-spherical eye there's a point constraint controlling the joint; on a spherical eye you could use the values directly). There was also a more elegant solution, but I can't remember it!
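That SDK-on-the-offset idea might look something like this sketch; the 'eyeCtrl.blink' driver, the 'lid_jnt_pointConstraint1' node and the -0.2 value are all hypothetical placeholders:

import maya.cmds as cmds

con = 'lid_jnt_pointConstraint1'

# At blink = 0 the constraint offset stays at zero...
cmds.setDrivenKeyframe(con + '.offsetY', currentDriver='eyeCtrl.blink',
                       driverValue=0.0, value=0.0)

# ...and at full blink the offset pushes the joint the last bit down so the
# lid actually seals.
cmds.setDrivenKeyframe(con + '.offsetY', currentDriver='eyeCtrl.blink',
                       driverValue=1.0, value=-0.2)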

WARNING: when you rotate this setup 180 degrees, the wires don't behave properly. How do you fix this? Scaling the wire to 0 works, but it's a pretty messy solution.

Nyeng's Stretchy Eye

Lip Rigging

Zipper Lip

Chad Vernon Sticky Lips

Ribbon Lips

Advantage- fast setup, relatively easy

Disadvantage - As I found out the hard way, the ribbon joints seem to need to be in your main skinned geo, not piped in through a separate blendshape node, or else you get double transforms. I'd like to find a solution for this, since it's limiting to have all your face joints on the main mesh -- at least in my humble opinion, it makes editing so much more difficult.

Corrective Displacement

corrective displacement (John Patrick)

Face Rigging With Textures

One way to do this is with projection textures.

BUT... the projection textures keep sliding. How do you fix this?

If you graph the input/output connections of your mesh's shape node, you'll notice the last thing in its history is a grayed-out node with the name of your shape node but with 'Orig' added at the end. Copy your mesh, delete the UV history on the copy, and then plug the copy mesh in where the Orig mesh is -- just hook the copy mesh into the place where the Orig is hooked, via the Connection Editor. To delete the UV history, delete the nodes that appear in the history in the Channel Box: take cutUV, or whatever other UV history you have, type its name into the selection field, press Enter, then delete it.

What this does is feed the mesh the right UVs through the entire history stack, which gets rid of any swimming.

Probably a bit complicated, but it's a nice way to not have to worry about when you do your UVs.

(lovisx)
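One possible scripted reading of that reconnection trick -- hedged, since the exact plug the Orig shape feeds can vary between setups; 'faceShapeOrig' and 'faceCleanShape' are hypothetical names for the grayed-out Orig node and the clean-UV duplicate:

import maya.cmds as cmds

orig = 'faceShapeOrig'
clean = 'faceCleanShape'

# Find every input the Orig shape currently feeds, then hook the clean-UV
# copy into those same inputs instead.
for attr in ('worldMesh[0]', 'outMesh'):
    dests = cmds.listConnections('{0}.{1}'.format(orig, attr), plugs=True,
                                 source=False, destination=True) or []
    for dest in dests:
        cmds.connectAttr('{0}.{1}'.format(clean, attr), dest, force=True)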

This didn't work for my texture-slide issue, but it might work for yours.

I simply needed to parent the textures to the main joint.

Scaling UVs

SDKs on repeatUV and offsetUV. Doh!
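A minimal sketch of that SDK approach -- 'face_place2dTexture', the 'faceCtrl.texScale' driver attribute and the key values are all hypothetical:

import maya.cmds as cmds

p2d = 'face_place2dTexture'   # texture placement node for the face projection
drv = 'faceCtrl.texScale'     # 0-1 driver attribute on a face control

# repeatU/V scale the projected texture...
for attr in ('repeatU', 'repeatV'):
    cmds.setDrivenKeyframe(p2d + '.' + attr, currentDriver=drv,
                           driverValue=0.0, value=1.0)
    cmds.setDrivenKeyframe(p2d + '.' + attr, currentDriver=drv,
                           driverValue=1.0, value=2.0)

# ...and offsetU/V slide it around
for attr in ('offsetU', 'offsetV'):
    cmds.setDrivenKeyframe(p2d + '.' + attr, currentDriver=drv,
                           driverValue=0.0, value=0.0)
    cmds.setDrivenKeyframe(p2d + '.' + attr, currentDriver=drv,
                           driverValue=1.0, value=0.1)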

Yep, some SDKs are probably the sensible way to go. But not as much fun as a bit of SOuP magick.

With SOuP you get "mapToMesh" and "meshToMap" nodes, so instead of trying to animate the place2dTexture stuff, you can animate the UVs by first converting them to verts. It works as follows...

With SOuP you are connecting outMesh (or worldMesh[0]) to inMesh from node to node, so you'll end up with several copies of your mesh, but you can hide all but the last. Anyway...

pCubeShape1.worldMesh[0] >> mapToMesh1.inMesh

mapToMesh1.outMesh >> pCubeShape2.inMesh

pCubeShape1.worldMesh[0] >> meshToMap.inMesh

pCubeShape2.worldMesh[0] >> meshToMap.inMesh2

meshToMap.outMesh >> pCubeShape3.inMesh

Where pCube1 is your original object, pCube2 is that object's UVs converted into a mesh that you can scale and deform, etc., and pCube3 is the resulting mesh, which is the same as pCube1 with the UVs converted back from pCube2.

(djx, cgtalk)
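For reference, a rough maya.cmds transcription of those connections, assuming the SOuP plugin is loaded and its node types really are called mapToMesh and meshToMap as in the post (the pCube names are the same placeholders used above):

import maya.cmds as cmds

map_to_mesh = cmds.createNode('mapToMesh', name='mapToMesh1')
mesh_to_map = cmds.createNode('meshToMap', name='meshToMap1')

# UVs of pCube1 become geometry on pCube2, which you can scale and deform
cmds.connectAttr('pCubeShape1.worldMesh[0]', map_to_mesh + '.inMesh')
cmds.connectAttr(map_to_mesh + '.outMesh', 'pCubeShape2.inMesh')

# The deformed 'UV mesh' is converted back into UVs on pCube3
cmds.connectAttr('pCubeShape1.worldMesh[0]', mesh_to_map + '.inMesh')
cmds.connectAttr('pCubeShape2.worldMesh[0]', mesh_to_map + '.inMesh2')
cmds.connectAttr(mesh_to_map + '.outMesh', 'pCubeShape3.inMesh')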