Game Development Community

Get vertex shader parameter value from main app

by Gerardo Glz · in General Discussion · 06/13/2008 (5:01 am) · 4 replies

Hey,
I hope this question fits in this forum section...
I wonder if there is any way to read a parameter value back from a vertex shader. Specifically, I want to modify a variable within the shader and then read it from the main application. Something like:

struct AppData
{
    float4 Pos : POSITION;
};

struct VertexOut
{
    float4 Pos : POSITION;
};

VertexOut main(AppData IN,
               uniform float4x4 ModelViewProjMatrix,
               out float4 result : TEXCOORD0) // needs a type and a semantic to compile
{
    VertexOut OUT;
    OUT.Pos = mul(IN.Pos, ModelViewProjMatrix);

    result = mul(IN.Pos, ModelViewProjMatrix);

    return OUT;
}

In the main application I want to read the variable 'result'.

Is it possible to do this in NVIDIA Cg or GLSL?

Thanks in advance!

#1
06/13/2008 (6:10 am)
Unfortunately, that's not how it works. In general, GPUs are heavily geared towards unidirectional data flow *towards* the GPU. Each type of shader takes a specific place in the GPU's processing pipeline and operates on the data that is fed into the top of it.

What you *could* do is define a render target and have the pixel shader write the transformed coordinates to the target buffer. This buffer could then be read.
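A rough sketch of that render-target approach, for the OpenGL of the time (this assumes a current GL context with GL_EXT_framebuffer_object and ARB_texture_float support; shader setup, error checking, and the actual draw call are omitted):

/* Attach a float texture to an FBO, render into it, read it back. */
GLuint fbo, tex;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA32F_ARB,
             width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_RECTANGLE_ARB, tex, 0);

/* ... draw, with a fragment shader writing the transformed
   position into its COLOR output ... */

GLfloat *results = malloc(width * height * 4 * sizeof(GLfloat));
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, results);

Note that glReadPixels stalls the pipeline until rendering finishes, which is a big part of the overhead mentioned below.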

However, if it's just a matrix multiplication you want done, as in your code snippet above (which, incidentally, performs it twice), then any speed gained on the computation itself will be lost to the tremendous readback overhead involved.

Taking the example above, what's your intention here?
#2
06/13/2008 (9:22 am)
Hey Rene,

Thanks for the reply. I used the code snippet just to test if i could read any parameter value from the vertex shader.

Actually the idea is more complex than that. I already have a raycasting-like procedure that projects the vertices of a model onto a camera viewport and avoids occlusion (I'm using voxels, so I can't use the OpenGL z-buffer directly). The matrix-vector multiplications currently run in the main application, but it takes a while (around 2 seconds) to perform all the maths, since I'm doing matrix inversions, vector transformations, etc. on a model with lots of selected points.

I wanted to feed the vertex shader some matrices and voxel positions (input as float4 positions) and let the shader perform all the operations using Cg's built-in matrix and vector functions, taking advantage of the hardware maths. Then I would read the final 3D vector results back and run further processing in the main application to read/write some voxels, all of this in near real time.

I considered using CUDA to perform the matrix-vector operations and free the main processor from this task, but vertex shaders seemed like a good fit for the problem (if only I could read the final vector positions back from the GPU with a function like cgGetParameterValuefr).
#3
06/13/2008 (11:59 am)
Hmmm, interesting problem. CUDA definitely is an option if restricting yourself to NVIDIA GeForce 8+ is okay for you. Otherwise, I think you could achieve pretty much the same effect (though with a bigger performance penalty) by doing the calculations on the GPU and reading the results back from a texture.

Intertwining CPU and GPU calculations has traditionally been a pain, since GPUs really aren't made for this kind of thing. Things do seem to be opening up somewhat here, though.
#4
06/13/2008 (12:14 pm)
Yeah, I take your point about intertwining CPU and GPU work, but so far I think it's the only option available for this type of application. Also, CUDA seems overcomplicated for this bit of code.

I found a GPGPU tutorial by Dominik Goddeke that gives me some ideas about your recommendation of reading the results back from a texture. I'll give it a go over the next few days.

Thanks a lot!!