Game Development Community

Screenspace to Worldspace

by Lukas Joergensen · in Torque 3D Professional · 06/16/2014 (5:46 am) · 11 replies

Hey guys! I've got another shader question for ya.

I've got a shader with the following code:
float3 center_pos = tex2D(smp_position, tex).xyz;
float3 eye_pos = g_mat_view_inv[3].xyz; // g_mat_view_inv is ViewInverse

float center_depth = distance(eye_pos, center_pos);
float radius = g_occlusion_radius / center_depth;
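For context, the last line scales a fixed world-space occlusion radius by 1/distance, the usual perspective-foreshortening trick in SSAO. A pure-Python sketch of that math (not shader code):

```python
import math

def screen_radius(occlusion_radius, eye_pos, center_pos):
    # A fixed world-space radius covers fewer pixels the farther the
    # sampled point is from the eye, so divide by the eye distance.
    center_depth = math.dist(eye_pos, center_pos)
    return occlusion_radius / center_depth

# e.g. a radius of 1 unit, seen from 10 units away, projects to 0.1
r = screen_radius(1.0, [0.0, 0.0, 0.0], [0.0, 0.0, 10.0])
```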

I tried to convert this to a T3D-shader with:
float4 prepass = prepassUncondition( prepassMap, uv0 );
float3 normal = prepass.xyz;
float depth = prepass.a;
float3 ep = float3(uv0, depth);
float3 center_pos = mul(matScreenToWorld, ep);
float radius = occlusionRadius / distance(eyePosWorld, center_pos);

However, my shader is not turning out as I'd like. Am I doing something wrong here?

#1
06/17/2014 (9:17 am)
Okay just to rephrase the question:

I'm trying to get world-space coordinates from screen-space coordinates and depth. These are the three methods I have tried:
float3 ep = float3(uv0, depth);

// First method
float4 center_pos = float4(mul(matScreenToWorld, ep).xyz, 1);

// Second method
center_pos.xy = ep.xy * (ep.z * nearFar) / (worldToScreenScale / texSize0);
center_pos.z = ep.z * nearFar.y;

// Third method
center_pos.x = ep.x * 2.0f - 1.0f;
center_pos.y = -(ep.y * 2.0f - 1.0f);
center_pos.z = ep.z;
center_pos.w = 1.0f;
center_pos = mul(center_pos, matScreenToWorld);
center_pos /= center_pos.w;
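For reference, the math behind the third method can be checked outside the shader. This is a pure-Python sketch (not T3D code; the projection matrix here is a hypothetical D3D-style example): project a known point to clip space, divide by w to get NDC + depth, then run the result through the inverse matrix and divide by w again to recover the original point.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major, column-vector convention) by a vec4."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def invert4(m):
    """Invert a 4x4 matrix via Gauss-Jordan elimination with partial pivoting."""
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(4)]
         for i, row in enumerate(m)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(4):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[4:] for row in a]

# Hypothetical D3D-style perspective projection (z mapped to [0,1], w' = z).
near, far = 0.1, 100.0
proj = [
    [1.5, 0.0, 0.0, 0.0],
    [0.0, 1.5, 0.0, 0.0],
    [0.0, 0.0, far / (far - near), -near * far / (far - near)],
    [0.0, 0.0, 1.0, 0.0],
]

world = [2.0, -1.0, 10.0, 1.0]
clip = mat_vec(proj, world)
ndc = [c / clip[3] for c in clip]        # screen x/y in [-1,1], plus depth

back = mat_vec(invert4(proj), ndc)       # un-project...
recon = [b / back[3] for b in back[:3]]  # ...and divide by w again
```

The second perspective divide is the step that's easy to forget: the un-projected vector comes back with w = 1/w_clip, so without it you only get the point scaled by 1/w.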
#2
06/17/2014 (10:58 pm)
http.developer.nvidia.com/GPUGems3/gpugems3_ch27.html

float z = prepassUncondition( prepassTex, IN.uv0 ).a;

float4 scrpos = float4(IN.uv0.x*2.0 - 1.0, IN.uv0.y*2.0 - 1.0, z*2.0-1.0, 1.0);

float4 D = mul(scrpos, matWorldToScreen);
float3 worldPos = D.xyz / D.w;

Keep in mind that screen-to-world is never 100% exact: information is lost going through the depth buffer, and GPU floating-point precision matters too.
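One thing worth double-checking in snippets like this is the [0,1] to [-1,1] remap itself: D3D texture coordinates have their origin at the top-left with y pointing down, while NDC has its origin at the screen center with y pointing up, so the y remap usually needs a sign flip (whether it's needed here depends on how uv0 was generated). A quick sketch of the remap:

```python
def uv_to_ndc(u, v):
    # Texture space: origin top-left, u right, v down, both in [0,1].
    # NDC: origin at screen center, x right, y up, both in [-1,1].
    return (u * 2.0 - 1.0, -(v * 2.0 - 1.0))

# Screen center maps to the NDC origin; the top-left texel corner maps
# to the top-left NDC corner (-1, +1).
center = uv_to_ndc(0.5, 0.5)
top_left = uv_to_ndc(0.0, 0.0)
```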
#3
06/18/2014 (12:33 am)
Thanks a ton for the help, Ivan! However, it doesn't seem to work...
This is my shader body:
float4 prepass = prepassUncondition( prepassMap, IN.uv0 );
float3 normal = prepass.xyz;
float depth = prepass.a;

// Early out if too far away.
if (depth > 0.99999999)
    return float4( 0,0,0,0 );

float3 ep = float3(IN.uv0, depth);
float4 center_sPos = float4(ep * 2.0 - 1.0, 1.0);
float4 D = mul(center_sPos, matWorldToScreen);
float3 center_pos = D.xyz / D.w;
return float4(center_pos,1);

And it just returns a single color that seems to be based on the position of the camera (e.g. viewing the scene from below returns yellow).

Why is it matWorldToScreen and not matScreenToWorld?
#4
06/18/2014 (11:52 am)
Okay, so az on IRC helped me figure out a little more. The following code works perfectly fine for calculating the world-space position of the current pixel:
center_pos.xyz = eyePosWorld + IN.wsEyeRay * depth;

Using that I came to realize that it seems like
float4 center_sPos = float4(IN.uv0 * 2.0 - 1.0, 1.0);
float4 D = mul(center_sPos, matWorldToScreen);
float3 center_pos = D.xyz / D.w;
for some reason calculates the eye position, regardless of which pixel it's currently at. So you just get a uniform color across the whole screen, which seems to be indicative of the current eye position.
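For reference, the working eye-ray one-liner can be mimicked in plain Python. The assumption here (based on how T3D's PostFX examples use it) is that wsEyeRay is the per-pixel world-space vector from the eye to the far plane, and depth is linear depth normalized so that 1.0 lands on the far plane:

```python
def reconstruct_world_pos(eye_pos, ws_eye_ray, depth):
    # worldPos = eyePosWorld + wsEyeRay * depth, component-wise.
    return [e + r * depth for e, r in zip(eye_pos, ws_eye_ray)]

# Eye at the origin, ray reaching the far plane at (0, 0, 100):
# a quarter of the way along the ray is (0, 0, 25).
p = reconstruct_world_pos([0.0, 0.0, 0.0], [0.0, 0.0, 100.0], 0.25)
```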

I don't know, I guess T3D sucks, but I'd really like to find a way to calculate position based on pixel coordinates and depth... Any ideas?

Edit:
Also, it seems like matScreenToWorld = Projection * World,
and matWorldToScreen = inverse(matScreenToWorld).

I hope that might help.
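On the matScreenToWorld/matWorldToScreen confusion: HLSL's mul is sensitive to argument order. mul(v, M) treats v as a row vector, which is equivalent to multiplying the column vector by the transpose of M, so passing a matrix on the "wrong" side silently applies its transpose, and that can make an inverse matrix appear mislabeled. A quick pure-Python check (hypothetical values, not T3D's actual matrices):

```python
def mul_row(v, m):
    """HLSL mul(v, M): v treated as a 1x4 row vector."""
    return [sum(v[r] * m[r][c] for r in range(4)) for c in range(4)]

def mul_col(m, v):
    """HLSL mul(M, v): v treated as a 4x1 column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def transpose(m):
    return [[m[r][c] for r in range(4)] for c in range(4)]

m = [[1.0, 2.0, 0.0, 0.0],
     [0.0, 1.0, 3.0, 0.0],
     [0.0, 0.0, 1.0, 4.0],
     [5.0, 0.0, 0.0, 1.0]]
v = [1.0, 2.0, 3.0, 1.0]

# mul(v, M) gives the same result as mul(transpose(M), v).
row_result = mul_row(v, m)
col_result = mul_col(transpose(m), v)
```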
#5
06/19/2014 (1:45 pm)
Filed an issue, as this seems to affect the stock MotionBlur shader as well.
#6
06/26/2014 (7:02 am)
I've tried doing this:
float3 current_pos = eyePosWorld + IN.wsEyeRay * depth;
//...
float2 screenVector = nearFar.y * depth * (sampleUV.xy - IN.uv0.xy) * texSize0 / worldToScreenScale;
float3 sample_pos = eyePosWorld + normalize(normalize(IN.wsEyeRay) + float3(screenVector, 0)) * nearFar.y * sample_depth;
Not sure how to interpret the results; it looks like it's sort of working and sort of not. Anyone see anything wrong with the math, though?
#7
07/01/2014 (1:16 pm)
Lukas, a year ago I came across this same problem and left thinking I didn't know enough. But we may really have a problem here.

The MotionBlur shader is very wrong; it seems it was written to force/fix the incorrect depth values.

Thanks for reporting the problem. I'll try to spend some time on it... I don't think it will be soon :(


#8
07/01/2014 (3:21 pm)
@Luis I'm glad I'm not alone with this issue; I've been working my ass off trying to solve it. I've tried custom math and several examples from around the net, but nothing works. I believe this might be an issue with the matrices T3D sends to the PostFX, but I can't confirm it.
#9
07/18/2014 (2:13 pm)
Lukas, you can take a look at my old SSAO2 resource (www.garagegames.com/community/resources/view/21060). It's been three years since I've touched it so my memory may be fuzzy, but it converts screen-space coordinates into 3D view-space coordinates, which sounds like what you want (not sure how you would get absolute world-space coordinates, but you don't actually need them for SSAO or SSGI). It creates an additional G-buffer which maps screen-space coordinates to the correct eye vector (the shaders are called SSAO_Pos_V and SSAO_Pos_P). Perhaps you can use those to get what you need.
#10
07/18/2014 (5:09 pm)
@Ryan Thanks, that was a pretty simple and obvious solution! Unfortunately, it doesn't really solve the deeper issue of not being able to translate screen space to world space.

But at the very least your SSAO resource gave me even more tools to work on my SSGI and SSDO stuff :P
#11
07/18/2014 (6:44 pm)
I had a gut feeling you could get world-space coordinates using the method from my SSAO2, but it just wasn't coming to me. It just dawned on me, however, that you can easily get the absolute world-space coordinates of a pixel using my resource. The additional G-buffer I mentioned stores, as an image, all the vectors from the camera to the corresponding screen-space coordinate location on the camera's far plane, in view-space (think of it as storing all the rays that emanate from the camera toward each visible pixel in the scene). My SSAO2 resource calculates each pixel's location in view-space by sampling this additional G-buffer to get the correct view-space vector, normalizing it, and then multiplying it by the depth from the depth buffer to get the view-space coordinate. At this point all you need to do to get the world-space coordinate is multiply by the inverse WorldView matrix, which you should already have access to.
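That recipe can be sketched in plain Python (the ray buffer, depth convention, and matrix layout here are assumptions based on the description above, not actual SSAO2 code): normalize the stored eye-to-far-plane ray, scale it by the eye distance from the depth buffer to get the view-space position, then multiply by the inverse view matrix for world space.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def view_pos_from_ray(far_plane_ray, eye_distance):
    # far_plane_ray: view-space vector from the eye to this pixel's
    # point on the far plane (the extra G-buffer described above).
    # eye_distance: distance from the eye, read from the depth buffer.
    d = normalize(far_plane_ray)
    return [c * eye_distance for c in d]

# Toy example: camera at world (1, 2, 3) with no rotation, so the
# inverse view matrix is just a translation by the eye position.
inv_view = [[1.0, 0.0, 0.0, 1.0],
            [0.0, 1.0, 0.0, 2.0],
            [0.0, 0.0, 1.0, 3.0],
            [0.0, 0.0, 0.0, 1.0]]

view_pos = view_pos_from_ray([0.0, 0.0, 100.0], eye_distance=5.0)
world_pos = mat_vec(inv_view, view_pos + [1.0])[:3]
```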

This extra matrix multiplication is totally unnecessary, though, as I mentioned before. Everything can be done in view-space with the same accuracy as in world-space... you just have to convert your algorithms to work in view-space. :) If you want to stay in world-space, though, this would work for what you're trying to do. The only downside is the extra load on the graphics card, creating and storing another fullscreen G-buffer. It works fine on newer hardware, but maybe not so well on older stuff. But having another G-buffer also provides another avenue for passing information downstream... the alpha channel could be used for whatever you wanted.

If you have any questions, please let me know. I've been wanting to modify my SSAO2 resource to do SSDO and SSGI for a long time, but I haven't been developing with Torque for quite a while now. So I'd love to see someone else get it accomplished. :)