Frustum/Rendering question
by GarageGamer · in Torque Game Engine · 07/14/2005 (7:15 am) · 5 replies
OK, I have posted threads similar to this in the past, so if this is getting redundant I apologize...
I am in the final stages of completing an edutainment game à la Pokemon Snap. We've created a virtual forest that allows the player to explore and take pictures of the different wildlife. As pictures are taken, the creatures "captured" in the photo are added to a log book, and the score is incremented appropriately.
I have no problem taking the picture using screenshot. I even adjusted the parameters to make sure it takes a snapshot of a particular area of the screen. Adding the photo to the log book is also not a problem.
The problem I'm having is determining exactly what it is that has been "captured" in the photo.
I've tried several different approaches using projectile and regular collisions, but they all have their problems. POV collision is way too accurate, regular collisions aren't always registering, and resizing projectiles just doesn't work.
I think my best bet is using the player's view camera, and manipulating the info it's already generating. I've been working my way through the engine, and have found where the frustum is created with the clipping planes, and at one point I found an array that I believe collected the images to be rendered. Unfortunately, now I can't remember where the hell that was.
I know the camera knows what objects are inside its frustum, both completely and partially. It has to. That's what it renders every time it refreshes the screen.
So my question is, how can I access this image list so I can use it for scoring in the game as well?
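For what it's worth, the containment test the engine runs against those clipping planes is simple geometry: a point is inside the frustum when it is on the inner side of all six planes. A minimal sketch of that test in Python (not Torque code, just the math; the same check works for any convex volume of planes, the camera frustum included):

```python
def plane(normal, point):
    # A plane stored as (normal, d) with n.x + d = 0;
    # the normal is assumed to point toward the inside of the volume.
    d = -sum(n * p for n, p in zip(normal, point))
    return (normal, d)

def inside_frustum(planes, p):
    # A point is inside when it lies on the non-negative side of every plane.
    return all(sum(n * c for n, c in zip(normal, p)) + d >= 0
               for normal, d in planes)

# Example: a unit cube treated as a six-plane "frustum".
box = [plane((1, 0, 0), (0, 0, 0)), plane((-1, 0, 0), (1, 0, 0)),
       plane((0, 1, 0), (0, 0, 0)), plane((0, -1, 0), (0, 1, 0)),
       plane((0, 0, 1), (0, 0, 0)), plane((0, 0, -1), (0, 0, 1))]
```

Whole-object culling additionally tests bounding boxes against the planes, which is why partially visible objects still get rendered.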
Any and every idea or suggestion would be greatly appreciated. I've been banging my head on the keyboard on this one for waaaaaaaaaaaaaaaaaaaay too long.
Thanks
GG
#2
07/14/2005 (1:57 pm)
I think that the way I'd tackle this (I haven't tried it yet, but my design calls for something similar) would be to:

1. When taking the photo, perform a ContainerRadiusSearch to check for objects of the TypeMask that your animals are using.
2. For all animals that are within the range of your ContainerRadiusSearch, check that they are within the line of sight of your player.
The AIClient enhancement resource on this site offers the following functions for LOS tests, which you could adapt:
// Check if the bot can visually see the target.
// %previous means that it was a previous target (already seen),
// hence has more chance of detection.
function AIPlayer::checkLOStoTarget(%this, %obj, %previous)
{
   %data = %this.getDataBlock();
   %eyeTrans = %this.getEyeTransform();
   %eyeEnd = %obj.getEyeTransform();
   %searchResult = containerRayCast(%eyeTrans, %eyeEnd,
      $TypeMasks::PlayerObjectType | $TypeMasks::TerrainObjectType |
      $TypeMasks::InteriorObjectType, %this);
   %foundObject = getWord(%searchResult, 0);

   if (%foundObject.getType() & $TypeMasks::PlayerObjectType)
   {
      %dist = %this.checkDisttoTarget(%obj);
      %angle = %this.check2DAngletoTarget(%obj);

      // Outside the sight arc on either side?
      if ((%angle > %data.sightArcRange && %angle <= 180) ||
          (%angle < 360 - %data.sightArcRange && %angle > 180))
         return false;
      else if (%dist <= %data.sightMinRange)
         return true;
      else
      {
         // Chance of detection falls off between min and max sight range.
         %testperc = (%dist - %data.sightMinRange) / (%data.sightMaxRange - %data.sightMinRange);
         %diff = 1 - %data.sightMaxAbility;
         %testval = (%testperc * %diff) + %data.sightMaxAbility;
         %testval = %previous ? %testval + 0.30 : %testval;
         if (getRandom() <= %testval)
            return true;
         else
            return false;
      }
   }
   else
      return false;
}

// Return distance to target.
function AIPlayer::checkDisttoTarget(%this, %obj)
{
   return VectorDist(%obj.getPosition(), %this.getPosition());
}

// Return vector to target.
function AIPlayer::checkVectortoTarget(%this, %obj)
{
   return VectorSub(%obj.getPosition(), %this.getPosition());
}

// Return the angle of a vector in relation to the world origin.
function AIPlayer::getAngleofVector(%this, %vec)
{
   %vector = VectorNormalize(%vec);
   %vecx = getWord(%vector, 0);
   %vecy = getWord(%vector, 1);

   if (%vecx >= 0 && %vecy >= 0)
      %quad = 1;
   else if (%vecx >= 0 && %vecy < 0)
      %quad = 2;
   else if (%vecx < 0 && %vecy < 0)
      %quad = 3;
   else
      %quad = 4;

   %degangle = mRadToDeg(mAtan(%vecy / %vecx, -1));

   switch (%quad)
   {
      case 1:
         %angle = %degangle - 90;
      case 2:
         %angle = %degangle + 270;
      case 3:
         %angle = %degangle + 90;
      case 4:
         %angle = %degangle + 450;
   }
   return %angle;   // was "return %degangle", which discarded the quadrant fix-up
}

// Return angle from the bot's eye vector to the target.
function AIPlayer::check2DAngletoTarget(%this, %obj)
{
   %eyeVec = VectorNormalize(%this.getEyeVector());
   %eyeAngle = %this.getAngleofVector(%eyeVec);
   %posVec = VectorSub(%obj.getPosition(), %this.getPosition());
   %posAngle = %this.getAngleofVector(%posVec);
   %angle = %posAngle - %eyeAngle;
   %angle = (%angle < 0) ? %angle * -1 : %angle;   // absolute value
   return %angle;
}

This ought to be a good starting point; if not, let me know, because I do have to solve this problem myself soon. :)
- Seb.
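Seb's two steps boil down to geometry that can be prototyped outside the engine. Here is a rough Python sketch of the "in range and inside the view cone" filter (the containerRayCast occlusion step is deliberately omitted, and every name here is made up for illustration):

```python
import math

def visible_in_photo(cam_pos, cam_dir, fov_deg, max_range, animals):
    """Return the animals within max_range and inside the view cone.

    cam_dir is assumed to be a normalized forward vector; occlusion
    (the ray-cast step) is not handled here.
    """
    caught = []
    half_fov = math.radians(fov_deg) / 2.0
    for name, pos in animals:
        to_animal = [p - c for p, c in zip(pos, cam_pos)]
        dist = math.sqrt(sum(v * v for v in to_animal))
        if dist == 0 or dist > max_range:
            continue  # at the camera, or too far away to count
        # Angle test via the dot product: inside the cone when the
        # angle to the forward vector is at most half the FOV.
        cos_angle = sum(d * v for d, v in zip(cam_dir, to_animal)) / dist
        if cos_angle >= math.cos(half_fov):
            caught.append(name)
    return caught
```

Using the photo's FOV (narrower than the player's, if you snapshot a sub-area of the screen) as `fov_deg` is what makes this act as the "delimiter" rather than a laser-thin ray.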
#3
07/14/2005 (4:30 pm)
Sebastian -

I've tried both radius search, which returns way too many wrong "captures", and LOS (like with weapons), which shoots like a laser and doesn't hit enough...
Your idea seems to expand the LOS a bit, and using it as a delimiter is a good idea. I'll look into it and let you know.
Thanks, and please keep the ideas coming...
When I have it working, I'll let you know how I did it.
#4
07/14/2005 (4:39 pm)
If you want a really accurate notion of whether it's visible in the image or not, perhaps you could do a separate render with each object a particular unlighted color (similar to rendering to a stencil buffer) and then count the pixels visible per object. You'd probably want to render to a relatively small viewport.

For example, perhaps there's a small teddy bear hiding behind a large, uh, rock. Both are in the viewing frustum, but teddy isn't in the image.
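Once you have that flat-colour render, the counting step is trivial. A hypothetical Python sketch, treating the small viewport as a 2-D array of per-object colour ids (0 = background; reading the buffer back from the engine is the part not shown here):

```python
def count_visible_pixels(id_buffer):
    """Count pixels per object id in a flat-colour render.

    id_buffer is a list of rows, each row a list of object ids;
    id 0 is treated as background/terrain and skipped.
    """
    counts = {}
    for row in id_buffer:
        for obj_id in row:
            if obj_id != 0:
                counts[obj_id] = counts.get(obj_id, 0) + 1
    return counts
```

Any object whose count is zero (or below some minimum pixel threshold) is occluded in the photo, which handles the teddy-behind-the-rock case that a frustum test alone cannot.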
#5
07/20/2005 (6:20 am)
Look into how the shape name HUD works. That thing knows exactly when something is on the screen and when it isn't.
Torque Owner David Barr
Viola Interactive Ltd
Since only visible objects are asked to render themselves, my first thought would be to try adding something simple to the render functions to collect a list of what is rendered each frame (easier if your creatures are one particular type of object, I suppose). The only overhead then is what you add to the render function, as the visibility is already determined.
One possible drawback I know of with this is that visibility is a somewhat conservative estimate, probably to avoid objects popping in and out of visibility at the scene edges (when using the glenablemetrics and interior render modes, I have noticed there is a margin around the visible screen that is also considered visible). Couple that with your grabbing a particular screen area, and you may have something saying it's visible when it's not in your photo.
Having said that, I am sure there is an easier, cheaper, or more accurate way of doing it. I will watch this thread, as it would be useful to know the answer.
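One way to tighten that conservative estimate is to project each candidate's position into screen space yourself and test it against the sub-rectangle you actually grab for the photo. A sketch of the standard perspective-projection math in Python (nothing here is Torque API; camera-space convention assumed is x right, y up, z forward):

```python
import math

def project_to_screen(cam_space_pt, fov_deg, width, height):
    """Project a camera-space point to pixel coordinates.

    Returns None when the point is behind the camera.
    """
    x, y, z = cam_space_pt
    if z <= 0:
        return None
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # focal scale from FOV
    aspect = width / height
    ndc_x = (f / aspect) * x / z   # normalized device coords, -1..1 on screen
    ndc_y = f * y / z
    px = (ndc_x * 0.5 + 0.5) * width
    py = (0.5 - ndc_y * 0.5) * height  # pixel y grows downward
    return (px, py)

def inside_photo(cam_space_pt, fov_deg, width, height, rect):
    """True when the point projects inside the grabbed sub-rectangle."""
    proj = project_to_screen(cam_space_pt, fov_deg, width, height)
    if proj is None:
        return False
    x0, y0, x1, y1 = rect
    return x0 <= proj[0] <= x1 and y0 <= proj[1] <= y1
```

Testing the object's center (or a few bounding-box corners) this way filters out the margin that the engine's visibility pass considers "on screen" but your cropped snapshot never captures.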