Game Development Community

The correct way to design models when planning to use facial animation?

by Kory Imaginism · in Game Design and Creative Issues · 03/19/2012 (6:05 am) · 6 replies

I've been around the GarageGames forum for a while now, and in that time most of the info I've received has been helpful. There is still some info, though, that is A.) never covered or B.) never asked about. I would like to have solid tutorials, or at least information on the engine like some of the other engines have. I think this is one of the main areas that should be focused on, although the documentation has come a long way.

Anyway, I'm in the process of designing models for my projects and I would like to take advantage of facial animation (if possible) in T3D. How should I design the model so T3D handles it? CryEngine wants the artist to separate the head from the body to set up the character for facial animation. In Unreal, I believe you can keep the model as a single solid mesh, but I'm not totally sure. The Source engine handles facial animation in its own particular way too. In T3D, is it blend animation, or should we look into a different method? I would just like to know which method to use or keep in mind while I'm in the design phase of my models.


Thanks

#1
03/19/2012 (5:14 pm)
Well, I have done some experimenting with morph targets, phonemes, and the blended animation capabilities that T3D has. Results are mixed, but I think that has more to do with my lack of facial animation skill than with the concept. I model the head attached to the body. Then I go through and create a number of morph animations that cover the basic phonemes. Simple definition: when a human makes the sound "ohh", the face forms in almost universally the same way in order to produce that sound. You can find more info on the basic phonemes with a web search. I place these in a thread, then I take a short audio clip and more or less time the thread to match the audio's phonemes. Time-consuming, but like I said, it does work.

A simpler version of the same theory would be to prerecord your audio clips, then create a string of morph animations to match. Place these 'clips' in a thread and sync the whole thing up with your audio clip as it plays back. It works, but again it is time-consuming keyframing the morphs on the artist's side. Hope that helps a little. Oh yeah, don't forget that the more detail in the head, the better the morphs work. Good LOD work can help lessen the impact on characters that are not 'right there'.
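To make the workflow above a bit more concrete, here is a minimal, engine-agnostic sketch of the timing step: turning a hand-marked phoneme track for an audio clip into keyframes for a morph thread. The phoneme names, morph names, and data layout are all illustrative assumptions, not T3D API calls.

```python
# Hypothetical sketch of timing morph animations to audio phonemes.
# Phoneme and morph names below are made up for illustration; in T3D you
# would author the actual morph sequences in your modeling tool.

PHONEME_MORPHS = {
    "AA": "morph_jaw_open",   # "ah" - jaw dropped
    "OW": "morph_lips_round", # "ohh" - lips rounded
    "M":  "morph_lips_closed",# lips pressed together
    "EE": "morph_lips_wide",  # lips stretched wide
}

def build_timeline(phoneme_track):
    """Turn a list of (phoneme, start_sec, end_sec) tuples -- e.g. marked
    by hand against an audio clip -- into weighted morph keyframes."""
    keyframes = []
    for phoneme, start, end in phoneme_track:
        morph = PHONEME_MORPHS.get(phoneme)
        if morph is None:
            continue  # unmapped phoneme: leave the face at its neutral pose
        keyframes.append({"time": start, "morph": morph, "weight": 1.0})
        keyframes.append({"time": end,   "morph": morph, "weight": 0.0})
    return sorted(keyframes, key=lambda k: k["time"])

# "ohh-m-ee" spoken over 0.7 seconds:
track = [("OW", 0.00, 0.25), ("M", 0.25, 0.40), ("EE", 0.40, 0.70)]
timeline = build_timeline(track)
```

The resulting timeline is what you would then key into the thread by hand; the point is just that each phoneme becomes a ramp-in/ramp-out pair on one morph target.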
#2
03/19/2012 (5:26 pm)
Thanks Ron, so it sounds like blend animation with morph targets would be the best way to do facial animation in T3D. At least I have a good direction to go in. Again, thanks! Can't wait for the next pack.
#3
03/19/2012 (8:45 pm)
Morph targets usually mean vertex animation, and IIRC that was removed from T3D.
Ron, can you verify and/or clarify?
#4
03/20/2012 (6:17 am)
Eb, so they removed vertex animation? I never really looked into it. I did get the sample model from rocketbox-libraries.com/us/index.php/; the character is fully rigged with one long blended animation, and within that animation the character makes a number of facial animations blended into it. I converted the model to DTS and the animation still works. My theory is that if I were to follow the same method as Rocketbox, then I would solve the facial animation part, but what about splitting the animations within the editor? I heard you can use triggers to activate them, or split them into separate sequences. I would just like to come up with the best method that works with T3D, so we'll be that much closer to a unified process for creating characters for T3D like CryEngine, UDK, and Source have.
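For the "splitting one long animation" part, the usual idea (in any engine) is to carve the long timeline into named sub-sequences by frame range. The sketch below is only an illustration of that bookkeeping; the frame numbers, FPS, and sequence names are invented, and the actual splitting in T3D would be done via a shape-constructor script or the Shape Editor rather than Python.

```python
# Illustrative only: carving one long rigged animation into named facial
# sub-sequences by frame range. All names and numbers here are assumptions.

FPS = 30  # assumed export frame rate of the long animation

SUB_SEQUENCES = {          # name: (start_frame, end_frame)
    "face_smile": (0, 24),
    "face_frown": (25, 49),
    "face_talk":  (50, 120),
}

def frame_range_to_time(name):
    """Return (start_sec, end_sec) for a named sub-sequence, i.e. where
    on the long timeline that facial clip lives."""
    start, end = SUB_SEQUENCES[name]
    return start / FPS, end / FPS
```

Once each facial clip has its own named sequence, playing and blending it independently of the body animation becomes straightforward.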
#5
03/20/2012 (7:59 pm)
That is not easily explained in depth and may require custom code for your ideal 'workitude'. :P

Blends are probably your best friend here; the animation 'weight/precedence' numbers 0 through 9 should help out, IIRC. The ShowToolPro manual did a decent job of explaining the concepts from TGE regarding blending and animation precedence... but it's been 6 years since I've read it, so don't hold me to that! :D
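The precedence idea mentioned above can be sketched loosely: when several active sequences animate the same node, the higher-priority one wins control of that node. This is only a rough model of the TGE/T3D concept, not engine code, and the sequence names and node sets are made up.

```python
# Loose sketch of blend precedence (numbers 0-9, higher wins), assuming
# each active sequence declares which skeleton nodes it animates.

active = [
    {"name": "run",        "priority": 0, "nodes": {"hips", "legs", "head"}},
    {"name": "face_blend", "priority": 5, "nodes": {"head", "jaw"}},
]

def controlling_sequence(node):
    """Return the name of the sequence that controls a given node,
    or None if nothing animates it."""
    candidates = [s for s in active if node in s["nodes"]]
    if not candidates:
        return None
    return max(candidates, key=lambda s: s["priority"])["name"]
```

Here the facial blend (priority 5) takes over the head and jaw nodes while the run cycle (priority 0) keeps the rest of the body, which is exactly why blends suit facial animation layered over a body sequence.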
#6
03/21/2012 (7:21 am)
Blends would work, as long as you have enough nodes [bones/joints/helpers/whatEVVAs] and start from a 'root'/neutral position/pose.

Coding those sequences to the sounds as Ron suggests is probably also a good way to go.

Vertex animation was supposedly deprecated in T3D?? So morphs ain't gonna work out of the box. Even where morphs are okay, you then need the extra resource binaries with the new shapes [unless you can load those ON THE FLY... is that how you've done it, Ron???]. Welcome to load-time or performance lag: each morph target would need to reside inside the memory construct of the DTS/DAE shape, making for a much larger file size. Cut scene? Perhaps the way to go... limited viewing scope and tailor-made for the viewer.
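To put a rough number on the file-size concern above, here is a back-of-the-envelope estimate. It assumes each morph target stores a full extra copy of the mesh's vertex positions as three 4-byte floats, which is a simplifying assumption, not the actual DTS/DAE layout.

```python
# Back-of-the-envelope morph-target overhead estimate.
# Assumption: one full position copy (3 floats) per vertex per target.

def morph_overhead_bytes(vertex_count, target_count, bytes_per_float=4):
    """Extra bytes added to a shape by storing target_count morph targets."""
    return vertex_count * 3 * bytes_per_float * target_count

# e.g. a 5,000-vertex head with 12 phoneme targets:
overhead = morph_overhead_bytes(5000, 12)
print(overhead / 1024, "KiB")  # roughly 703 KiB of extra vertex data
```

Even under this crude model, a detailed head with a full phoneme set adds a noticeable chunk per character, which is why the load-time/memory worry above is fair.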

I think the place to begin is to really map out EXACTLY what you want to see or experience. Nearly anything is possible with the source code; the question is... is it reasonable to ask the engine to perform this?!? And to what real 'gain' in the player's experience?

...okay I gave 3 pennies worth on morphs in T3D...I'm done. LOL