DIY mocap idea
by Daniel Buckmaster · in Artist Corner · 04/08/2009 (12:40 pm) · 6 replies
So how about this. You shoot a motion with as many cameras as you can manage. With or without markers, though markers would probably make things easier. You sync the capture somehow, for example by using a camera flash at the start of the recording.
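For the flash-sync idea, a crude sketch of how you might find the sync frame automatically - just look for the biggest jump in average brightness between consecutive frames. This assumes you've already decoded the video into grayscale frames (e.g. as NumPy arrays); the function name is mine, not from any library.

```python
import numpy as np

def find_flash_frame(frames):
    """frames: sequence of grayscale frames (2-D arrays).
    Returns the index of the frame whose mean brightness jumps the most
    relative to the previous frame - a crude way to spot the sync flash."""
    means = np.array([f.mean() for f in frames])
    jumps = np.diff(means)          # brightness change between frame i and i+1
    return int(np.argmax(jumps)) + 1

# Example: 10 dark frames with a "flash" at index 5
frames = [np.full((4, 4), 10.0) for _ in range(10)]
frames[5] = np.full((4, 4), 200.0)
print(find_flash_frame(frames))  # -> 5
```

Run this per camera and you get a frame offset to align each video stream against the others.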
You recreate the arrangement of the cameras in a 3D program - preferably one custom-made to do the next step. But whatever, setting up the 3D scene just like the real scene is important.
This 3D program then tracks each marker it can see during each frame. If markers are well-defined, or you can manually set marker locations (that's how you'd get markerless capture - doing it all manually :P), this would be less of a problem area. (It'd also be nice to be able to manually specify marker IDs, to make the next step easier.) The 3D program then uses the apparent marker position from each camera to draw a line in space. Each frame, all the cameras cast out their lines towards the markers they can see. To determine the position of a marker, you just find the closest point between all the lines that are supposed to pass through it. This could be done by simply finding the closest point between each pair of lines (so for 9 cameras, 9 choose 2 = 36 points) and averaging those points.
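The line-intersection step above can be sketched pretty compactly. This uses the standard closest-point formula for two skew lines (via the cross product of their directions), takes the midpoint of the shortest segment between each pair, and averages - exactly the pairwise scheme described above. Function names are my own placeholders.

```python
import numpy as np
from itertools import combinations

def closest_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p1+t*d1 and p2+s*d2.
    Returns None if the lines are (nearly) parallel."""
    n = np.cross(d1, d2)
    nn = n.dot(n)
    if nn < 1e-12:                       # parallel rays give no usable point
        return None
    t = np.linalg.det(np.stack([p2 - p1, d2, n])) / nn
    s = np.linalg.det(np.stack([p2 - p1, d1, n])) / nn
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

def triangulate(rays):
    """rays: list of (origin, direction) pairs, one per camera that sees
    the marker. Averages the pairwise closest midpoints."""
    pts = []
    for (p1, d1), (p2, d2) in combinations(rays, 2):
        m = closest_midpoint(p1, d1, p2, d2)
        if m is not None:
            pts.append(m)
    return np.mean(pts, axis=0) if pts else None

# Example: three cameras all looking at a marker at (1, 2, 3)
marker = np.array([1.0, 2.0, 3.0])
origins = [np.zeros(3), np.array([10.0, 0.0, 0.0]), np.array([0.0, 10.0, 0.0])]
rays = [(o, (marker - o) / np.linalg.norm(marker - o)) for o in origins]
print(triangulate(rays))  # -> approximately [1. 2. 3.]
```

With real, noisy data you'd also want to reject pairs whose closest-approach distance is large - that's one of the "throwing out erroneous points" mechanisms mentioned below.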
If you can automate this as much as possible, and add in some mechanisms for throwing out erroneous points, wouldn't it be fairly simple to take all the point data per frame and make a BVH?
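To give a feel for how little the BVH output step involves, here's a minimal writer - just one ROOT joint with translation channels, so it only shows the file layout (HIERARCHY block, then MOTION block with one line per frame). A real skeleton would need nested JOINTs and rotation channels solved from the marker positions; that solving is the genuinely hard part, not the file format.

```python
def write_bvh(path, frames, frame_time=1.0 / 30.0):
    """Minimal single-joint BVH sketch.
    frames: list of (x, y, z) root positions, one per capture frame."""
    with open(path, "w") as f:
        f.write("HIERARCHY\n")
        f.write("ROOT Hips\n{\n")
        f.write("  OFFSET 0.0 0.0 0.0\n")
        f.write("  CHANNELS 3 Xposition Yposition Zposition\n")
        f.write("  End Site\n  {\n    OFFSET 0.0 1.0 0.0\n  }\n")
        f.write("}\n")
        f.write("MOTION\n")
        f.write(f"Frames: {len(frames)}\n")
        f.write(f"Frame Time: {frame_time:.6f}\n")
        for x, y, z in frames:
            f.write(f"{x:.4f} {y:.4f} {z:.4f}\n")

# Example: two frames of a root joint drifting along X
write_bvh("test.bvh", [(0.0, 1.0, 0.0), (0.1, 1.0, 0.0)])
```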
Or have I just described $50,000 worth of work? :P
The reason I ask is that I suck at animating, and I like mocap because it gives actual natural motion. I'm thinking of ways to get mocap data cheaply - getting cameras isn't *too* much of an issue (there's always borrowing), but dealing with the video is. I wonder whether the procedure I've outlined above is too much for me to attempt programming on my own. It doesn't *seem* complicated in principle - but I'm sure the complexity would be in tracking markers, eliminating errors, and providing a UI :P.
About the author
Studying mechatronic engineering and computer science at the University of Sydney. Game development is probably my most time-consuming hobby!
#2
04/08/2009 (1:16 pm)
Yeah, that was another option my searching brought up - film maybe a front and side angle (top would be nice as well), and use the film the same way as reference images when you model. This is probably what I'll end up doing ;P.
[I think Blender supports movies as background images, no?]
But hey, I can dream...
#3
04/09/2009 (5:53 am)
That's quite a lot of work and expensive equipment. You'd have to use loads of high-quality, high-speed cameras, write custom software, write an exporter for said software, and so on.
Is it doable? Yes
Is it easy? No
Would it rock to have such a setup? Without a doubt :D
(But then again, I have other ideas for motion tracking in the works)
#5
04/11/2009 (5:36 am)
Yeah, that was what got me thinking about mocap again ;P. But it costs money. Borrowing friends' DV cameras doesn't ;).
#6
04/11/2009 (7:21 am)
It's a cool idea Daniel, just some pointers: record the angle and rotation of all cameras, use syncing software for the video streams, and bring lots of patience :D
If you do set something like that up I would love to see it! :)
Associate Steve Acaster
[YorkshireRifles.com]
Stick a few bits of shiny tape on yourself at all of your joints and run around the garden looking like a fool. Convert film to images and align in Blender (or whatever you're using).
I was thinking of just doing it manually - aligning your armature with the pics. I also figure you could get away with just one side view and one front view, so just 2 cameras. There'd have to be a bit of ad-lib animation tweaking, but hey, all my animations are created by eye anyway.
Thought about it, but haven't tried it though.