
Hi, I want to import mocap data onto a face rig (a standard bones-on-motion-path rig). The method I am using is to capture the data, clean it up, and then import the cleaned-up marker positions into Maya. I then want to connect the clusters controlling the curves directly to the marker positions.
Now the problem: the head movement is also part of the motion capture, and I want to inverse this movement so that the head stays still. To do this I must compute the head movement. I have 6 markers on positions on the head that are not affected by facial expressions, so how can I use these six points to derive the head rotation and translation? My solution at the moment only uses 3 of the 6 points.
This is what I am doing at the moment: I create a two-joint chain and give it an IK setup with a pole vector. I parent the root joint to one of the markers, the IK handle to another one, and the PV to a third marker. I can now use the rotation and translation values of the root joint to inverse the movement, but yeah, I am only using half of the data I have to do this.
And I can MEL script if that would have to be the solution.
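
Roughly what that setup looks like as a MEL sketch (marker1, marker2, and marker3 are placeholder names for three of the head-marker locators):

// build a two-joint chain with an RP IK handle and hang it off three markers
select -cl;
joint -p 0 0 0 -n "headRoot";
joint -p 0 1 0 -n "headTip";
string $ik[] = `ikHandle -sj "headRoot" -ee "headTip" -sol "ikRPsolver"`;
string $handle = $ik[0];
parent "headRoot" marker1;           // root joint rides on the first marker
parent $handle marker2;              // IK handle follows the second marker
poleVectorConstraint marker3 $handle; // third marker acts as the pole vector
// headRoot.translate / headRoot.rotate now hold the head motion to inverse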

All constraints can take as many inputs as you like*; Maya then just exposes weights for each one, and if they are equal, the average is what you get.

So I don't really see any problem here, other than that you probably want to use aim and transform instead of orient, but hey....

*Just select more than one thing when you choose the menu item, or put more items in the list when using MEL. Maya rigging 101, actually.
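
For example, in MEL (m1 through m6 and head are placeholder names):

// every constraint accepts multiple targets; the weights default to 1 each,
// so the result is the average of all targets
pointConstraint m1 m2 m3 m4 m5 m6 head; // average position
aimConstraint m1 m2 m3 m4 m5 m6 head;   // aim toward the averaged target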

Thanks for the reply, but I was talking about the multiple aim constraint, and that does not give me a result I can use. Is there a way I would be able to use an orient constraint? Remember, I only have the translation values of the markers to work with.

No you weren't; you didn't address the issue anywhere. I was just left to assume that's what you meant.

does not give me a result I can use

Why not? That's a circular definition: "I cannot do this because I cannot use the data." You need to explain what exactly goes wrong. Remember, I don't see your screen or read your mind (whenever it seems I do, it's just because I've had this discussion before).

I can't really see any reason why not, assuming you know how to set the constraint's up value. If the person is moving very wildly, say doing a somersault of some kind, you can use one of the points as up.

OK, so you might not get a truly coherent dataset, but that's a recording problem. I bet you would get something very close.

I create a two-joint chain and give it an IK setup with a pole vector

Why would you use IK? I mean, IK is an aim.

OK, I will try harder. This is not for full-body mocap, it is only facial, so there is not a lot of head movement.

So the software captures the data and I can output it as C3D or FBX. I am using the FBX export and then importing it into Maya. This gives me the positions of the markers moving in 3D space as locators.

The rig I want to attach this to has manipulation points on the same spots where I placed the markers for the motion capture. So now I want to connect the face rig manipulators to the locators to get it doing the same as the mocap.

Now the problem is that the markers are exported with the head rotation and translation. This means that if the actor keeps his expression the same but his head moves a bit, all the facial markers will move, all my facial rig manipulators will move, and the animated character's expression will change when it was actually just a head rotation. So even a small movement will cause big problems.

So now what I want to do is compute the head translation and rotation. I do have six markers on the head that are not affected by the actor's facial expressions, so these markers move together as a group. I can get the head translation by using a multiple point constraint on these head markers, but the multiple aim constraint is not accurate enough to compute the rotation.

The mocap process is not precise, and this is why I cannot just use 3 markers out of the 6; maybe the 3 I chose are the ones that are captured the worst at that time... I need an average. These head markers will always be very close to the same position relative to each other, but they are moved and rotated through space. So if I duplicated these six markers without keys on frame 1 and grouped them, I would be able to rotate and move this group so that the locators match up to the original markers on a different frame. I want to know how I can compute the rotation values of this group to make it match as best as possible. I can then reverse this movement and apply it to all the markers to get rid of the head movement.

Maybe something in Maya Live?
I don't know how to do this, but if there is a way, I could upload a scene file to explain it a bit better.

Yes, you explained all that*. However, you didn't explain why the aim does not work for multiple points. I gave you clarification that this should work, and you told me it does not. My question is WHY NOT; this is the problem.

My biggest problem is that, as a general rule, I cannot assume people are competent, even if I want to. See, in practice I find that competency is actually quite rare. Why? Well, the point is a bit convoluted: a competent person doesn't need to ask these sorts of questions. So that leaves 2 alternatives:

  1. you are competent but missed one very early step of the puzzle*
  2. you're working outside your comfort zone

Now, if I make 6 or indeed 120 points, select ALL the points plus the aiming object, and choose aim, I get an aiming direction that points in the average direction of those points. And this is what you want to get. Granted, the data is slightly offset from your main orientation, but that's not really the data's fault; for that you put in an orient constraint and manipulate its offset.

Perhaps you're experiencing flipping. Well, that's just a failure to pick the right up vector and quadrant information. Since you're going to bake the data down anyway, you can use a Euler filter to take this out.
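
For example, after baking (a sketch; it assumes the baked rotation curves got the default names head_rotateX/Y/Z):

// filterCurve's default filter is the Euler filter;
// it re-picks rotation quadrants so the baked curves stop flipping
filterCurve head_rotateX head_rotateY head_rotateZ;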

Maybe something in Maya Live?

No, that's just tracking; cleaning the data is still a job for your rig.

*Yes, I've done this before.

In practice you need a point on the forehead which is locked to the skull. Otherwise you get fluctuation.

PS: if something doesn't rotate, then orient does nothing for you.

PPS: I find that aim, in practice, is understood by only a very, very tiny minority of Maya users, because they assume they understand what aiming is.

  • In my first job I debugged a computer that didn't work; turns out I had connected the mouse and keyboard the wrong way around. DOH!

OK, my problem is with the world up vector, but it seems like I am gonna have to be insulted 500 times before I get a solution. I solved it with IK because I am more competent using that. I would, however, love to send you the scene and see if you can get a decent result with orient constraints, so that I can understand the world up vector thing a bit better. I posted this question on the site looking for help; I was not thinking I would be insulted for not knowing something.

Maybe something in Maya Live?

No, that's just tracking; cleaning the data is still a job for your rig

That is exactly what I want to do: I want to track the head movement using the six markers. They are not exactly locked to the skull, because we did not want to kill the actor, but they do not move a lot at all. The fluctuation is more a result of the mocap camera resolution not being that high.

insulted 500 times before I get a solution
not thinking I would be insulted for not knowing something.

Sorry, not my meaning. I am discussing WHY the communication is tainted. Being tolerant of somebody saying you're wrong is a prerequisite to becoming better. On the other hand, you INSISTED that you knew something you did not, so I have to devolve the discussion back, because I have to unravel the things you take for granted and replace them with new ones. The only way I could ever do that is by convincing you that you don't know something. And for that I must explain the realities of communication**. There's nothing wrong in not knowing, but insisting against contrary evidence IS.

See, aim and IK work exactly the same (because they are using the same math module for it; aim just eliminates the solution recursion for additional bones, and finding the two double-plane intersections for each).

Two vectors form a basis: one up and one along. Along is the local-space vector you want turned toward the object to aim for. Up determines how to orient the object around that axis. Up is basically the same thing as a pole vector, and along is the aim direction.

Flipping happens for exactly the same reason Maya bones flip. So if you can get a solution that does not flip with bones, you can get a solution that doesn't flip with aim. The calculation is the same for the first segment.

Basically, when you don't account for the along axis, it is:

(vector from start location to target) cross (up) = side vector guess

(side vector) cross (vector from start location to target) = new up

Along is used to offset the solution from the identity of the (along, side, up) matrix.

Now, for this reason, once the direction passes the plane of the solution, objects flip, just like a bone would when passing the pole vector.

So to solve this, place the aiming point below the solutions, or choose an up that aims away from the points and then offset the solution later.
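
In MEL vector terms, the guess above is just this (a sketch; $start, $target, and $up are assumed inputs):

vector $aim = unit($target - $start);  // vector from start location to target
vector $side = unit(cross($aim, $up)); // side vector guess
vector $newUp = cross($side, $aim);    // re-orthogonalized up
// when $aim crosses the plane containing $up, $side changes sign:
// that is the flip, same as a bone passing its pole vector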

Alternatively, you can use several IK or aim setups to get the individual values, each with its own local pole, and sum the results with one orient constraint. You should only do this if the solution is practically tricky; this way you can bisect the data to find an easier up vector for each subset of the data.

That is exactly what I want to do: I want to track the head movement using the six markers

So you haven't tracked the data yet? So you don't have a motion capture then?

I mean, if you HAVE 6 markers already that your rig tracks, then that means you have already tracked the data. See, tracking here refers to the act of turning the points into 3D data. Tracking IS the mocap, and the term is reserved entirely for mocap. You call it constraining if you do it in Maya, because there's no tracking involved: Maya knows where things are.* So there's no point in trying to guess where things are going if, by the time they are there, you get the exact data.

Live, which is by the way deprecated, only takes 2D stuff and translates it into points in 3D. The entire discussion devolves to nothing if you don't already have the 3D points captured. So you have already done the image tracking; otherwise you'd be sending me the video to track.

fluctuation is more a result of the mocap camera resolution not being that high.

There you go, so you have done the tracking.

they are not exactly locked to the skull, because we did not want to kill the actor, but they do not move a lot at all.

lol, no, that's fine; that then means they are locked.

OK, I'll friend you; be sure to send a Maya ASCII file.

  • Tracking refers to the phase where the computer follows the dots in the picture, because it needs to track them down just like a huntsman follows the trail of a deer.

** This is how I work.

First, I assume you know the solution and just missed it because you were storming ahead; this catches 60% of the problems. Then I ask for clarification, which by the way you haven't given me. The only thing you gave me is "does not work": no explanation. Unfortunately, that gives me nothing to work on. Then I'm in trouble, as I now know what you DON'T know, but I am unable to communicate with you unless you help me.

The next step is to become proactive and assume you don't know anything. Why? Well, see, I KNOW you know something; I just have no idea what it is, so I have to test your skills. At this point MOST people give up. Why? Because it sounds like I assume you're an idiot. I don't; that's the next step. I've just used up all my guesses on what your problem might be. This step very quickly devolves into a squabble, because I have to ask about or explain pretty trivial stuff. But more importantly, I've lost trust in what you say, so I have to consider that the message itself is broken.

I would like to add a thing that came to me yesterday.

There is a mathematical method called the least squares method that could solve both the rotation and position in one go (well, actually it could solve shear and scale at the same time too). So you could look into that; however, you'd really like it to be a node. I'm currently looking into how to force the method to discard shear and scale without expanding the central matrix or relying on iterative methodologies.

This would probably be better in terms of accuracy, as it would, as a side effect, attack the noisiness of your data. But as a solution it is slightly more involved.

Thank you, you gave me a lot of info to work through. I will do that and hope I am able to solve it. I do realize that I did have issues with the world up vector. I was just getting frustrated because I could not find a solution, and I know I was incompetent, but no one understands Maya 100%, and that is why I come here to ask questions.
Anyway, thanks for the help, and I hope we understand each other better now. I will let you know when I get it working. I have some other work to get done first: a script that changes a model to look like Lego so that we can make "fake" stop motion.

Well, I am currently testing the least squares fitting, which is extremely accurate and pretty fast to process. Expanding it to ignore shear proves to be a bit hard though (though the data should be inclined to have very minimal shear, if any). However, elementary fitting is as follows.

T = (A^T*A)^-1*(A^T*B)

Where A is a 6x4 matrix with each point in a row and the last column 1 (they are points in space, in homogeneous form), and B holds the corresponding measured points you want the points in A fitted to. This results in a transformation matrix T, which you can decompose with the decomposeMatrix node and load into your transform node.

PS: sorry, the board doesn't render formulas all that well.
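
Written out in clean notation (same symbols as above; primes mark the measured positions):

\[
A = \begin{pmatrix} x_1 & y_1 & z_1 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ x_6 & y_6 & z_6 & 1 \end{pmatrix},\qquad
B = \begin{pmatrix} x'_1 & y'_1 & z'_1 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ x'_6 & y'_6 & z'_6 & 1 \end{pmatrix},\qquad
T = (A^{\top}A)^{-1}A^{\top}B
\]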

16 days later

hi christianmassyn,

I might be of help; I hope it isn't too late, cuz I just saw this thread. I'm trying to understand your problem. I'm imagining you've got a scene with a bunch of points (probably locators) moving around. Six of those locators are there to "supposedly" describe the movement of your actor's head. But the movement of the head isn't really as important as the movement of the markers for the face, right? So if I understand you correctly, your goal is to subtract the movement (and rotation) of the head from the track points for the other facial markers.

If my understanding of your problem is wrong, please disregard my solution. (NOTE: It might seem like I'm talking down to you cuz I'll over-explain steps, but I'm not. I find it weird, actually; you're already scripting, and most of what I'll say might not need explanation. But I'll over-explain things nonetheless, just to avoid miscommunication.) Otherwise, cross your fingers and hope this works:

The first thing you need to do, just as you originally intended, is to get the head's movement and rotation. I think it'll be better to separate position and orientation. To get the position, we'll just go for the average of those six points. This means: 1) creating a locator, null, sphere, or any transform node of your choice; 2) selecting all six locators and then the object we just created; 3) point constraint. Turn off maintain offset.

Scrub through your animation. The object we created should now be dead center among the head markers. What it doesn't have, currently, is rotation. Let's name that object "headPosition" so we don't lose it.
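
In MEL, that step is just (headMarker1 through headMarker6 are placeholders for your six head locators):

// one point constraint with six targets at equal weights lands
// the new transform on the average of the six markers
string $headPosition = `createNode transform -n "headPosition"`;
pointConstraint headMarker1 headMarker2 headMarker3
                headMarker4 headMarker5 headMarker6 $headPosition;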

Next, what we'll have to do is get the rotation of the head. This is where aim constraints become important. An aim constraint lets you make sure an object is looking at a particular object at all times. But looking at objects isn't everything. Imagine this scenario: make a gun out of your hand so that your index finger and thumb are sticking out and the rest are tucked into your palm. Then point at something, anything. Try the Start menu button, your mouse, a water bottle's cap. Notice that you could turn your hand so that your thumb is no longer pointing at the ceiling but instead pointing at the horizon? This is where up vectors and world up vectors come into play. You define an up vector so that Maya makes your index finger point at a target you define, while at the same time you can also define which direction your thumb points.

Just some definitions:
1) aim vector: the vector (in the object's local space) that defines the direction you want to aim.
2) up vector: the vector (again in the object's local space) that defines which part of the object is its top. This works a lot like the "This side UP" stickers on packages or crates.
3) world up vector: a vector that defines which way is up in your scene, if you are using "Vector" as your world up type.

Now, if we were to use world up type "Vector", we'd be supplying a vector that defines the up in our scene. Think about this: that would mean a constant world up vector. This won't work in your case (if my imaginings are correct). What you need to do is set up an object that'll serve as your world up and use "Object Up" as your world up type.

How do we do this? Well, we work with the data that we have. We'll create two new locator/null/transform nodes, constrained to positions that'll serve as the new "up" and the new "target". We do this the same way we did "headPosition", only this time we won't use all six locators; we'll use maybe two reliable markers. You'd also want to pick markers (from the six) to constrain your new object so that it falls as far from "headPosition" as possible. Hmm, if this part is unclear, consider the following: if you chose all six markers again, your new object would end up in the exact same position as "headPosition". If you chose only 5 markers, you'd be slightly farther from "headPosition". For aim constraints to work, you'll want to place your "targetObject" and "upObject" as far away as possible. Then name them, of course.

Now to set up the aim constraint: open the option box for the aim constraint, choose "Object Up" for your world up type, and enter the name of your "upObject" in World Up Object. Then select your "targetObject", then your "headPosition", and hit Add or Apply. Note this assumes you don't really care about offsets and such; if you do, you'll have to put them in here.
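
Or, the same thing in MEL (upObject and targetObject named as above):

// aim headPosition at targetObject, with upObject defining which way is up
aimConstraint -worldUpType "object" -worldUpObject "upObject"
              "targetObject" "headPosition";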

So now your headPosition object should be moving and rotating somewhat correctly.

If you are unsatisfied with the movement, the reason would be faulty mocap data. You may want to try setting up the point constraint with fewer markers.

If you are unsatisfied with the orientation, you'll have to re-set up your "upObject" and "targetObject". Again, farther is better, though you can only go so far.

Now, for the last part: how do we get the positions of your facial markers in relation to your new "headPosition"? This part is pretty simple, though it'll be tedious. And because you can script, we won't have much of a problem.

First, you'll have to make duplicates of each facial marker and parent these duplicates to the "headPosition" object. Then you'll point-constrain each duplicate to its original. After that, you'll yet again create another duplicate of each locator (and probably store these in another group). Then, lastly, you'll connect the translate attributes of the first duplicates to the last duplicates in the group. The last duplicates in the group have nothing but movement relative to the head.

Note: instead of duplicating (which is unnecessary), you can simply create another transform node.

in script, if I may:

/// start here ////
// run this script with all your facial markers selected
string $headPosition = "headPosition"; // the name of the object you used for "headPosition"

string $grp = `createNode transform -n "groupOfTransformNodesWithRelativeMovement"`;
string $sel[] = `ls -sl`;
for ($each in $sel) {
    // the duplicate-equivalent that lives under headPosition and follows the marker,
    // so its local translate becomes the marker's movement relative to the head
    string $constrained = `createNode transform -n ($each + "_constrained")`;
    parent $constrained $headPosition; // per the steps above; without this the translate stays in world space
    pointConstraint $each $constrained;
    // the node that receives only the head-relative movement
    string $relativeTransformNode = `createNode transform -n ($each + "_relative")`;
    parent $relativeTransformNode $grp;
    connectAttr ($constrained + ".t") ($relativeTransformNode + ".t");
    toggle -localAxis $relativeTransformNode; // optional: displays local axes so you can see
                                              // the new shapeless transform nodes moving
}
/// end ///

If you don't want to use the script, another option is to simply select all the transform nodes that are point-constrained and parented under your "headPosition", then Bake Simulation (Edit → Keys → Bake Simulation). Just keep things at default and hit Bake.

This will generate keyframe animation for every movement your markers make, sort of like a recording. You can then delete the point constraints.
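
The MEL equivalent is roughly (a sketch; it assumes the point-constrained nodes are currently selected, and the 1-100 frame range is a placeholder for your shot's range):

// bake the constrained translates down to keys
bakeResults -simulation true -t "1:100" -at "tx" -at "ty" -at "tz" `ls -sl`;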

If there's too much to delete by hand, you can simply type:

select -r "headPosition";
select -hi;
delete ls -sl -et "pointConstraint";

Then, lastly, re-parent all those transform nodes under a new group that isn't moving, preferably under the world.

Whew, mouthful! Hope it works!

Keep us updated!

David

Yes, quite a good explanation. Though the least squares method is undoubtedly better, since even IF the data is corrupted it will try to minimize the corruption, with fewer steps involved. (But if you know what's corrupted, then that's another deal.)

That'll be awesome, Joojaa. I wish I knew mathematical notation, so I could translate it into code.

Maya knows most of the operations, so you just pretty much type it in as such. Except you must do the squaring yourself, because Maya doesn't know how to deal with larger-than-4x4 matrix multiplications properly in the API. Which isn't a big deal; you can find code to do it brute force almost anywhere for free.
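
If you need it, a brute-force multiply in MEL looks like this (a sketch; matrices are stored row-major in flat float arrays, since MEL's matrix type tops out at 4x4):

// multiply an (ar x ac) matrix by an (ac x bc) matrix, both row-major flat arrays
proc float[] matMul(float $a[], int $ar, int $ac, float $b[], int $bc)
{
    float $out[];
    int $i, $j, $k;
    for ($i = 0; $i < $ar; $i++) {
        for ($j = 0; $j < $bc; $j++) {
            float $sum = 0;
            for ($k = 0; $k < $ac; $k++)
                $sum += $a[$i * $ac + $k] * $b[$k * $bc + $j];
            $out[$i * $bc + $j] = $sum;
        }
    }
    return $out;
}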