Game Development Community

Anaglyph 3D post effect

by Adam Beer · in Torque 3D Professional · 02/04/2010 (6:03 pm) · 17 replies

Would anyone know how to convert this full screen shader into a post effect:

www.torquepowered.com/community/forums/viewthread/62714

#2
02/26/2010 (3:59 am)
Copy this to game->core->scripts->client->postFx.cs
singleton ShaderData( PFX_Anaglyph )
{
   DXVertexShaderFile = "shaders/common/postFx/postFxV.hlsl";
   DXPixelShaderFile  = "shaders/common/blurP.hlsl";

   pixVersion = 2.0;
};

singleton PostEffect( AnaglyphFx )
{
   isEnabled = true;
   allowReflectPass = false;

   renderTime = "PFXAfterBin";
   renderBin  = "AL_FormatToken_Pop";

   shader = PFX_Anaglyph;
   texture[0] = "$backBuffer";

   renderPriority = 10;
};

blurP.hlsl

#include "shaders/common/postFx/postFx.hlsl"

uniform sampler2D backBuffer : register(S0);

float4 main( PFXVertToPix IN ) : COLOR0
{
   // Distance (in uv space) to shift the two tinted copies of the scene.
   float sampleDist0 = 0.02;
   float2 samples[1]  = { float2(  0.3,  0.3 ) };
   float2 samples2[1] = { float2( -0.3, -0.3 ) };

   float4 sum = tex2D( backBuffer, IN.uv0 );
   for ( int i = 0; i < 1; i++ )
   {
      // Two copies of the backbuffer, offset in opposite directions.
      float4 col1 = tex2D( backBuffer, IN.uv0 + sampleDist0 * samples[i] );
      float4 col2 = tex2D( backBuffer, IN.uv0 + sampleDist0 * samples2[i] );

      // Tint one copy red and the other cyan ("blue" here is really cyan).
      float4 red  = { 1, 0, 0, 1 };
      float4 blue = { 0, 1, 1, 1 };
      sum += col1 * red;
      sum += col2 * blue;
   }

   return sum / 2;
}
#3
02/26/2010 (7:24 am)
Thank you very much Ivan, it works great.
#4
02/26/2010 (7:39 am)
How does it look?

www.ignitiongamespm.com/uploads/Adam/anaglyph3dtest.jpg
#5
02/26/2010 (9:19 am)
By the way, motion blur can be created using the scene depth plus PrevScreenToWorld/ScreenToWorld - see the GPU Gems 3 book, Chapter 27.
They are already exposed by the postFx system.
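For anyone curious, here's a rough, untested sketch of how the GPU Gems 3 chapter 27 idea maps onto a postFx pixel shader. The sampler and matrix names below (depthBuffer, screenToWorld, prevWorldToScreen) are placeholders of mine, not the actual names the postFx system exposes, and the depth fetch is assumed to be a linear value in the red channel:

```hlsl
// Untested sketch of GPU Gems 3, ch. 27 camera motion blur.
// ASSUMPTION: sampler/uniform names are placeholders; the real names
// exposed by the postFx system will differ.
#include "shaders/common/postFx/postFx.hlsl"

uniform sampler2D backBuffer  : register(S0);
uniform sampler2D depthBuffer : register(S1);

uniform float4x4 screenToWorld;     // this frame: screen -> world
uniform float4x4 prevWorldToScreen; // last frame: world -> screen

float4 main( PFXVertToPix IN ) : COLOR0
{
   float depth = tex2D( depthBuffer, IN.uv0 ).r;

   // Reconstruct this pixel's world position from its depth.
   float4 H = float4( IN.uv0.x * 2 - 1, ( 1 - IN.uv0.y ) * 2 - 1, depth, 1 );
   float4 worldPos = mul( screenToWorld, H );
   worldPos /= worldPos.w;

   // Where was this world position on screen last frame?
   float4 prevPos = mul( prevWorldToScreen, worldPos );
   prevPos /= prevPos.w;

   // Screen-space velocity, then blur along it.
   float2 velocity = ( H.xy - prevPos.xy ) / 2.0;

   float4 color = 0;
   const int numSamples = 8;
   for ( int i = 0; i < numSamples; i++ )
      color += tex2D( backBuffer, IN.uv0 + velocity * ( i / (float)numSamples ) );

   return color / numSamples;
}
```

Note the thread mentions PrevScreenToWorld being exposed; the sketch above wants the previous frame's world-to-screen matrix instead, so you'd either invert it or bind the other direction.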
#6
05/22/2010 (8:33 am)
This is dope, I'm gonna try this when I get home. I don't need to modify the C++, right (I'm on binary)? I just need to make a new shader file and a script file and it should work, right?
#7
05/22/2010 (11:51 pm)
Cool, so I went ahead and implemented this, here's a couple screenies:

hayesnewmediadesign.com/comps/anaglyph1.png
hayesnewmediadesign.com/comps/anaglyph2.png
Thanks so much for this. My teacher has been experimenting w/ anaglyph rendering in Max, and I'm gonna be presenting my capstone project that I made with Torque on Tuesday, so it'll be sweet to whip this out on him, lol.

edit: BTW I altered the blur distance and flip-flopped the red and cyan; it seems to look better for me. Here's my little tweak:

#include "shaders/common/postFx/postFx.hlsl"

uniform sampler2D backBuffer : register(S0);

float4 main( PFXVertToPix IN ) : COLOR0
{
   // Slightly larger blur distance than the original 0.02.
   float sampleDist0 = 0.03;
   float2 samples[1]  = { float2(  0.3,  0.3 ) };
   float2 samples2[1] = { float2( -0.3, -0.3 ) };

   float4 sum = tex2D( backBuffer, IN.uv0 );
   for ( int i = 0; i < 1; i++ )
   {
      float4 col1 = tex2D( backBuffer, IN.uv0 + sampleDist0 * samples[i] );
      float4 col2 = tex2D( backBuffer, IN.uv0 + sampleDist0 * samples2[i] );

      // Tints are swapped relative to the original: "red" is really
      // cyan here and "blue" is really red.
      float4 red  = { 0, 1, 1, 0 };
      float4 blue = { 1, 0, 0, 0 };
      sum += col1 * red;
      sum += col2 * blue;
   }

   return sum / 2;
}
#8
05/24/2010 (5:55 am)
It looks anaglyph alright, but is it really 3D? The offsets on objects close to the camera look the same as on objects far away.

I think it is possible to generate stereo images from depth in real time, but it's probably far from trivial.
#9
05/24/2010 (6:12 am)
Perhaps sampling the depth map could directly affect the uv offset when sampling the backbuffer for each side - we'd just have to define a sensible maximum offset.

Also, do we really need to shift things on the v coordinate? I could be wrong, but since the eyes are level, it only makes sense to me to offset on the u coordinate. From the code, I'd guess the diagonal offset looks weird with the glasses on. :)
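A minimal sketch of that depth-driven idea might look like this (untested; the depthBuffer sampler name is an assumption, and it assumes a linear 0..1 depth value in the red channel):

```hlsl
// Hypothetical depth-driven anaglyph pass (untested sketch).
#include "shaders/common/postFx/postFx.hlsl"

uniform sampler2D backBuffer  : register(S0);
uniform sampler2D depthBuffer : register(S1); // assumed: linear depth in .r

float4 main( PFXVertToPix IN ) : COLOR0
{
   // Maximum horizontal disparity, in uv units, for the nearest objects.
   float maxOffset = 0.02;

   // Near objects (small depth) get a big offset; far objects almost none.
   float depth  = tex2D( depthBuffer, IN.uv0 ).r;
   float offset = maxOffset * ( 1.0 - depth );

   // Shift only on u, since the eyes are separated horizontally.
   float4 left  = tex2D( backBuffer, IN.uv0 + float2( offset, 0 ) );
   float4 right = tex2D( backBuffer, IN.uv0 - float2( offset, 0 ) );

   // Left eye -> red channel, right eye -> green/blue (cyan).
   return float4( left.r, right.g, right.b, 1.0 );
}
```

Caveat: this offsets where each destination pixel *reads from* rather than where source pixels should land, so it's only an approximation of true reprojection, but it would at least vary the disparity with depth.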
#10
05/24/2010 (7:08 am)
Has no one ported this to the T3D beta 1?
I mean as a resource.
#11
05/24/2010 (2:05 pm)
I also wonder if it's really 3D when looking through glasses - or if it looks weird when doing so. I would think you need some way of determining depth per pixel for the offset to look truly 3D.
#12
05/24/2010 (2:55 pm)
It's not 3D, no. The red/cyan copies of the backbuffer are offset diagonally instead of horizontally. You can see what I mean by looking at the tail of the airplane in my image above. Even if those red/cyan copies were aligned horizontally, the most this post effect could produce is a flat 2D image through the glasses.

Until the depth buffer is sampled and per-pixel offsets are used, instead of just copying the backbuffer into the red/cyan channels, there won't be any 3D effect.

It would be awesome if someone were able to get this effect to work. I'll even send someone some 3D glasses if they really want to give this a try. :)
#13
05/25/2010 (5:17 am)
I can't think of a simple (aka: not incredibly shader-intensive) way of doing it, and Google is failing me (I found tons of articles about determining depth from stereo images, but none on going the other way around).

The main problem is that the effect needs to draw pixels from the source image into different locations on the destination images. A pixel shader can't do that: you can only read pixels from different positions, which is not the same as writing pixels to different positions.

The first idea that comes to mind is to first generate a horizontal offset map from depth. Then, in another pass, fetch N samples to the left and to the right from the color, offset, and depth buffers, and check whether the offset stored in the buffer matches the offset we're sampling from, to decide if that sample will be used or not (the depth tells us whether the sample is in front of or behind the previous sample).
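That gather pass might be sketched roughly like this (completely untested and hypothetical; the offsetBuffer layout, its sampler name, and the hard-coded backbuffer width are all assumptions):

```hlsl
// Hypothetical sketch of the "gather search" second pass described above:
// for each destination pixel, scan sideways for a source pixel whose own
// disparity would land it here, keeping the front-most match.
#include "shaders/common/postFx/postFx.hlsl"

uniform sampler2D backBuffer   : register(S0);
uniform sampler2D offsetBuffer : register(S1); // pass 1: r = disparity, g = depth

float4 main( PFXVertToPix IN ) : COLOR0
{
   const int N = 8;                // search radius, in samples
   float texelU = 1.0 / 1024.0;    // ASSUMED backbuffer width

   float4 best = tex2D( backBuffer, IN.uv0 );
   float bestDepth = 1.0;

   for ( int i = -N; i <= N; i++ )
   {
      float2 uv = IN.uv0 + float2( i * texelU, 0 );
      float2 od = tex2D( offsetBuffer, uv ).rg;

      // Does this source pixel's own offset land it on our pixel,
      // and is it in front of the best match so far?
      if ( abs( ( uv.x + od.r ) - IN.uv0.x ) < texelU && od.g < bestDepth )
      {
         best = tex2D( backBuffer, uv );
         bestDepth = od.g;
      }
   }

   return best;
}
```

The loop with dependent texture reads would be heavy on shader model 2.0 hardware, which is exactly the "incredibly shader-intensive" problem mentioned above.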
#14
05/25/2010 (5:45 am)
Just thinking out loud:

It would also be interesting to learn more about other methods that would be feasible for use on the screen.

Even the use of a depth map is a cheap trick here (alas, better than not using it at all - like the solutions above). It still won't allow you to peek behind something with one eye while the other eye's view is obstructed by that object, since the image was essentially made with a single camera.

So it's either 2 cameras and perfect 3D (but low FPS), or one camera and some dirty tricks (playable FPS). :)

LCD shutter glasses are more expensive, but they would probably result in a better-quality effect. They would work with one camera, but the FPS would still seem to be halved. (..or would it?)
#15
05/25/2010 (8:22 pm)
My shutter glasses take only a 10 fps hit.
#16
05/26/2010 (5:08 am)
So $prepass contains depth information and is available to the post effects. Could that be used to determine the offsets?
#17
05/26/2010 (7:51 am)
Yeah, there is definitely something wrong in the camera setup; there does not appear to be any parallax or convergence between the two images. My suggestion would be to implement 3D Vision by nVidia. To do it correctly, render one set of targets from one camera position, then render the other set, then combine the two - you are probably looking at more than half the FPS going out the window to do it with shaders alone. I think the best bet is adding some sort of real stereo system.