Game Development Community

Reading the G-Buffer

by Dave Wagner · in Torque 3D Professional · 08/27/2013 (2:22 pm) · 6 replies

I'm porting a sensor simulation from TGE to T3D. In TGE I would read the depth buffer and calculate the distance to a bunch of points. I know that with Advanced Rendering turned on, T3D uses deferred rendering, so how do I read the G-Buffer into main memory, and how do I index into the depth buffer data?

If I turn Advanced Rendering off can I just read the depth buffer as in TGE or does it still use a G-Buffer?

In TGE I would call:

glReadPixels( 0, 0, currentRes.w, currentRes.h, GL_DEPTH_COMPONENT, GL_FLOAT, mDepthBufferStorage );

to read the depth buffer, and then use the following function to calculate the distance to a point in the rendered scene, given an azimuth and elevation relative to the center of the screen.

float GetRange( const double azimuth, const double elevation )
{
    double cosAzimuth   = cos( azimuth );
    double sinAzimuth   = sin( azimuth );
    double cosElevation = cos( elevation );

    // Rotate a vector by elevation and azimuth:
    // (0,1,0) * rotX( elevation ) * rotZ( azimuth )
    //
    // x = sin( azimuth ) * cos( elevation )
    // y = cos( azimuth ) * cos( elevation )
    // z = -sin( elevation )
    Point3F pt;
    pt.x = sinAzimuth * cosElevation;
    pt.y = cosAzimuth * cosElevation;
    pt.z = -sin( elevation );

    double oneOverY = 1.0 / pt.y;

    // Convert that to screen space
    int xOffset = (int)( ( pt.x * oneOverY * halfScreenWidthDivTanHalfFOV ) + halfScreenWidth );
    int yOffset = (int)( ( pt.z * oneOverY * halfScreenHeight ) + halfScreenHeight );

    // These tests are probably not necessary.
    // Clamp the X offset to a valid range
    if( xOffset < 0 )
        xOffset = 0;
    else if( xOffset >= (int)width )
        xOffset = width - 1;

    // Clamp the Y offset to a valid range
    if( yOffset < 0 )
        yOffset = 0;
    else if( yOffset >= (int)height )
        yOffset = height - 1;

    // Calculate the offset into the depth buffer for X
    float* baseOffset = mDepthBufferStorage + ( width * yOffset ) + xOffset;

    // Take the depth buffer value, convert it into a Z depth parallel to the
    // front render plane, then multiply by the depth adjustment to get the
    // length of the ray.
    double distance = nearTimesFarField / ( zBufferFarField - (*baseOffset) * farMinusNearField );

    // Calculate the length of the ray given the distance perpendicular to the front plane
    distance *= oneOverY;

    return (float)distance;
}

#1
08/27/2013 (5:56 pm)
I've never used it, but my guess is that you could use SceneContainer::castRayRendered() to project a ray out and see if it hits anything, then calculate the distance between your starting point and the point it returns.

If your sensor sim inherits from SceneObject, you can call:
getContainer()->castRayRendered(...)


Scott
#2
08/28/2013 (7:15 am)
Thanks Scott, but I'm pretty sure that castRayRendered() returns a hit against the rendered collision surfaces and not the actual geometry. I've used the castRay function before in my ship physics to calculate the volume of the ship under the water and the center of mass for that volume. I think I had to modify the function in TGE to collide with the water's actual surface, not the water level.

For my LADAR sensor simulations, I need the accuracy of the actual geometry, but I think we're using the geometry as our collision surface in T3D for just about everything in the mission anyway, so it may work.

I'm also concerned that using this function call will be too slow since I need to do a couple hundred ray casts per frame without significantly impacting the frame rate.

If anyone knows whether this function actually collides with geometry in T3D, and whether it is fast enough to handle a couple hundred calls per frame without dropping the frame rate too much, please let me know.

I'll probably run a test myself and see what I get.
#3
08/30/2013 (4:40 am)
This is handy: you have to calculate the projected position, transform it to view space using the inverse projection, and then transform that to world space using the inverse view transform, which should just be the world transform of the camera, since ((Cw)^-1)^-1 = Cw.

http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/
#4
08/30/2013 (7:10 am)
@Dave
Do me a favor please and post your findings on the performance of this.
#5
08/30/2013 (7:38 am)
Thanks Ivan. Great link! I just have to figure out how to fit the code into the T3D engine pipeline. I haven't played around with their shader pipeline yet.

So if I understand Matt's article correctly, I would use his shader to render out the normalized view-space z values, and then read the screen buffer into main memory, just like the screen capture code does. Then I would do my sampling just as before with the depth buffer, modified to use the normalized z values instead of the normalized 1/z values in the normal depth buffer, because I want the distance to the eye point and the values in the buffer would be the normalized distance to the view plane. Does this sound correct?
#6
10/01/2013 (7:50 am)
@Paul

You wanted me to post my performance results. I wasn't able to do enough raycasts using T3D raycasts, but I was able to hit my goal of 50,000 raycasts per second at an 80 fps render rate using PhysX raycasts, down from 130-180 fps. The fps does drop a little when the robot is moving. This was with lighting set to basic. I still think that in the long run I would like to raycast into the depth buffer, because I'm not going to pick up details like the sprite grass or the leaves on the trees without it, and the player is just a pill shape in the PhysX world.