Game Development Community

Rendering the weapon image above everything

by Bryce · in Torque Game Engine · 12/03/2009 (9:34 pm) · 20 replies

Hey everyone

This might be a lot better than the ugly weapon pushback thing 1.4 has going... Would there be a way to tell the rendering code to draw the first person weapon image above EVERYTHING in the scene, so we won't see it clip into objects? Just a thought, anybody have ideas?

Thanks!

#1
12/03/2009 (10:45 pm)
EDITED: Yeah.. I'm just going to replace this entire post. It had a bug, and the EndSort idea could have interfered with the fxSun anyway. The basic idea still remains, though. That being: modify the ShapeImageRenderImage sorting if isFirstPerson(). Just going to do it a bit differently this time.

First, in shapeBase.cc find ShapeBase::prepRenderImage(). Scroll down a bit and find a block like this:

for (U32 i = 0; i < MaxMountedImages; i++)
      {
         MountedImage& image = mMountedImageList[i];
         if (image.dataBlock && image.shapeInstance)
         {
            DetailManager::selectPotentialDetails(image.shapeInstance,dist,invScale);

            if (mCloakLevel == 0.0f && image.shapeInstance->hasSolid() && mFadeVal == 1.0f)
            {
               ShapeImageRenderImage* rimage = new ShapeImageRenderImage;
               rimage->obj = this;
               rimage->mSBase = this;
               rimage->mIndex = i;
               // BEGIN CHANGES
               //rimage->isTranslucent = false;
               if (isFirstPerson())
               {
                  rimage->isTranslucent = true;
                  rimage->sortType = SceneRenderImage::Point;
                  rimage->poly[0] = state->getCameraPosition();
                  rimage->tieBreaker = true;
               }
               else
                  rimage->isTranslucent = false;
               // END CHANGES
               rimage->textureSortKey = (U32)(dsize_t)(image.dataBlock);
               state->insertRenderImage(rimage);
            }

            if ((mCloakLevel != 0.0f || mFadeVal != 1.0f || mShapeInstance->hasTranslucency()) ||
                (mMount.object == NULL))
            {
               ShapeImageRenderImage* rimage = new ShapeImageRenderImage;
               rimage->obj = this;
               rimage->mSBase = this;
               rimage->mIndex = i;
               rimage->isTranslucent = true;
               rimage->sortType = SceneRenderImage::Point;
               rimage->textureSortKey = (U32)(dsize_t)(image.dataBlock);
               // ANOTHER CHANGE IS:
               if (isFirstPerson())
                  rimage->poly[0] = state->getCameraPosition();
               else
                  state->setImageRefPoint(this, rimage);
               // THAT'S ALL THE CHANGES
               state->insertRenderImage(rimage);
            }
         }
      }

Then further down the same file, find ShapeBase::renderObject(). Scroll down a bit to this part:

// THIS LINE MOVES UP FROM BELOW
   ShapeImageRenderImage* shiri = dynamic_cast<ShapeImageRenderImage*>(image);

   // render shield effect // <- This comment is erroneous and should be removed.

   if (mCloakLevel == 0.0f && mFadeVal == 1.0f)
   {
      // THIS CONDITION NEEDS MODIFYING
      //if (image->isTranslucent == true)
      if ((!shiri && image->isTranslucent) || image->tieBreaker)
      {
         TSShapeInstance::smNoRenderNonTranslucent = true;
         TSShapeInstance::smNoRenderTranslucent    = false;
      }
      else
      {
         TSShapeInstance::smNoRenderNonTranslucent = false;
         TSShapeInstance::smNoRenderTranslucent    = true;
      }
   }
   else
   {
      TSShapeInstance::smNoRenderNonTranslucent = false;
      TSShapeInstance::smNoRenderTranslucent    = false;
   }

   TSMesh::setOverrideFade( mFadeVal );

   // AND THIS LINE MOVED UP ABOVE
   //ShapeImageRenderImage* shiri = dynamic_cast<ShapeImageRenderImage*>(image);

   if (shiri != NULL)
   {
      renderMountedImage(state, shiri);
   }
   else
   {
      renderImage(state, image);
   }

Ok. Still untested, but my headache is a bit better now (than when I wrote the first suggestion here), so that's a good sign. :P

.. my only concern now is whether setting the sort point to the camera point will cause any trouble. Should be ok though.

Oh yeah. That should work in 1.5, 1.4, 1.3... whatever.

And I still want BOLD tags in code! See how useful they could be here?
#2
12/04/2009 (12:33 am)
This compiles with no errors, but the weapons still clip into walls. When you get time, could you do a test and see if it works for you? No rush.
#3
12/04/2009 (1:31 am)
*smacks forehead* No need to test. I know it won't work for me. Honestly, where is my head tonight? Yeah, of course they're still clipping. It's not just the render order we have to worry about; that's only half the problem. The other piece of the puzzle is the depth buffer, and that presents more of a problem. See.. even if they are rendered last, they still respect the depth buffer. If the depth buffer says the wall is closer than the end of the gun, the end of the gun is not drawn. And if we simply disable the depth buffer, the weapon image will not be drawn correctly, since its polygons are not depth-sorted prior to rendering.
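
To see why render order alone can't fix the clipping, here's a toy, single-pixel model of the depth test (plain C++, illustration only -- this is not Torque or OpenGL code, just the idea):

```cpp
#include <cassert>

// A toy single-pixel depth buffer (illustration only, not engine code).
// Depth is in [0,1]; smaller values are closer to the camera, and the
// buffer starts cleared to the far plane.
struct DepthPixel {
    float depth = 1.0f;  // far plane
    int   owner = -1;    // id of the draw call that last won this pixel

    // Standard "closer wins" comparison, like OpenGL's default GL_LESS.
    bool draw(int id, float fragDepth) {
        if (fragDepth < depth) {
            depth = fragDepth;
            owner = id;
            return true;
        }
        return false;  // rejected: an earlier draw is already closer
    }
};
```

Even if the weapon (say, at depth 0.5) is drawn dead last, the wall (at 0.3) still owns the pixel -- once the wall's depth value is in the buffer, draw order changes nothing.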

*sigh* Sorry to say, this will really require a more complicated solution.

So what are our options? Just off the top of my head...

1) Clear the depth buffer before drawing the weapon image. -- that would work, but we need a way to be sure we're done drawing anything that might need the depth buffer before we clear it.

2) Draw the weapon image first; before anything else is drawn to the depth buffer. Then use the stencil buffer to avoid drawing anything on top of the weapon image. -- no. Any transparency in the weapon image won't be able to blend properly. forget that one.

3) Depth sort the entire mesh prior to rendering. We can then disable the depth buffer and draw the image properly. -- much more complicated. And there would be a [noticeable?] performance penalty with this approach.

I'm thinking option 1. But I'm going to want to give this more thought than I did last time. :P
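
As a rough sketch of what option 1 means (plain C++ toy, not engine code): clearing the depth buffer resets every stored depth to the far plane, so whatever is drawn next wins unconditionally. In real GL the clear would be glClear(GL_DEPTH_BUFFER_BIT); here it's modeled on a single pixel.

```cpp
#include <cassert>

// Toy single-pixel depth buffer (illustration only). Smaller depth = closer.
struct Pixel {
    float depth = 1.0f;
    int   owner = -1;
};

// "Closer wins" test, like GL_LESS.
bool drawIfCloser(Pixel& p, int id, float d) {
    if (d < p.depth) { p.depth = d; p.owner = id; return true; }
    return false;
}

// Option 1: reset the stored depth to the far plane, as
// glClear(GL_DEPTH_BUFFER_BIT) would do for the whole buffer.
void clearDepth(Pixel& p) { p.depth = 1.0f; }
```

The catch mentioned above still applies: anything that needs the scene's real depth values (precipitation, sun flares) has to be drawn before the clear.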
#4
12/04/2009 (2:05 am)
Hmm. Bugger. Precipitation uses EndSort and it does need the depth buffer in order to render properly against world geometry. Yeah we'll really need to preserve the depth buffer somehow. That or extend the SceneState to include additional sorting levels.

Ok. I hate to have to draw anything twice, but the easiest solution is probably to overdraw the weapon image: draw it once writing (but not reading) the depth buffer, then draw it again to fill in the inevitable gaps created by drawing an unsorted mesh without the depth buffer fully enabled. Yes, it means more rendering work, but on the plus side it allows the weapon images to play nicely with precipitation and fxSun flares. And with the speed of modern graphics cards, it's probably faster to draw the weapon image twice than it would be to depth-sort the entire mesh first.

So here, try this out. In shapeBase.cc, ShapeBase::renderMountedImage(), find the line "image.shapeInstance->render":

      image.shapeInstance->setupFog(fogAmount,state->getFogColor());
      image.shapeInstance->animate();
      // ADDITION
      if (isFirstPerson())
      {
         glDisable(GL_DEPTH_TEST);
         image.shapeInstance->render();
         glEnable(GL_DEPTH_TEST);
      }
      // END ADDITION
      image.shapeInstance->render();

Sorry I can't easily test this myself. I actually don't have a working setup that uses ShapeBaseImages. It should work though. :/
#5
12/04/2009 (9:14 am)
Still no luck :(
I do not understand how this rendering stuff works at all...
#6
12/04/2009 (1:03 pm)
Bugger. This time I'm surprised. That should have worked.

Ok. Which version of Torque are you working with?

...

Regarding this rendering stuff: It's really not terribly complicated. Which part has you confused? The OpenGL depth buffer? Torque's render path? Perhaps I can explain some of it.
#7
12/04/2009 (6:01 pm)
I'm using Torque Game Engine 1.4.2. The weapon still clips for some reason, even though it compiled just fine.

I don't understand anything about rendering...the depth buffer, the gl functions, all that fun stuff. I'm a C++ noob if you can't tell.
#8
12/04/2009 (6:48 pm)
Yeah.. that's the thing with these rendering issues. It can easily compile just fine and still not do what you expect it to. The compiler only checks to make sure you're calling the gl commands properly, not that you're calling the proper gl commands. :P

1.4.2. ok. Let me extract a 1.4.2 archive and see what more I can learn.
#9
12/05/2009 (12:07 am)
Ok. It works now. Two issues: I didn't realize that Player had overridden renderMountedImage(), and it turns out disabling the depth test didn't quite do what I thought it did.

So, here ya go: The change to renderMountedImage() from post #4, rip that out; you don't need it. Then open player.cc and find Player::renderMountedImage() and implement the following addition:

image.shapeInstance->setupFog(fogAmount,state->getFogColor());
      image.shapeInstance->animate();
      // ADDITION	
      if (isFirstPerson())
      {
         S32 dfunc;
         glGetIntegerv(GL_DEPTH_FUNC, &dfunc);
         glDepthFunc(GL_ALWAYS); 
         image.shapeInstance->render();
         glDepthFunc(dfunc);
      }
      // END ADDITION
      image.shapeInstance->render();

There ya go. I'll let you go play with that now while I write up an explanation of just why that worked. :)
#10
12/05/2009 (2:29 am)
So, the basic problem is that the clipping that you're seeing is technically the "correct" way to draw the scene. Since the mounted weapon is actually being allowed to penetrate partially inside other objects, it makes sense that you should see the wall instead of the part of the weapon that is in fact behind it. Of course, this looks ugly.

The more "proper" solution (in terms of accurately simulating real objects) would be to prevent the penetration by pushing the weapon away from other objects. This I guess was implemented in 1.4; but you indicated a dislike for that solution. So the alternative is to temporarily override the mechanism by which the renderer chooses which object is "in front" and force it to draw the object we want to see.

In any complex 3D scene where you have overlapping polygons, you need a mechanism for determining which polygons are visible and which are covered. This can be done to a large degree simply by depth-sorting the polygon list prior to drawing. Depth-sorting refers to the process of sorting the polygons so that they are drawn in order, beginning with those farthest from the camera and ending with those closest to the camera. ("Depth" here means distance "into" the scene. Think of the camera lens as the "surface".) This way the closer polygon is always drawn after the more distant polygon; thus the closer polygon is visible and the more distant polygon may be hidden. It's much like painting a scene in layers. This approach is necessary to properly render polygons that may have some transparency, because a transparent polygon needs to be "blended" with what is "behind" it. Unfortunately, this approach is computationally expensive, and in the case of non-transparent polygons it results in a lot of unnecessary "overdraw". Overdraw occurs when a pixel is drawn to the image buffer by one polygon and then completely replaced by another polygon. Since the pixel from the first polygon is never seen, it is a waste of time to draw it in the first place. Surely there is a way to avoid that?
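
The depth-sorting described above can be sketched in a few lines (plain C++, illustrative only -- Torque's actual sorting lives in sceneState.cc and uses its own structures): sort back-to-front by squared distance from the camera, then draw in that order.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// Squared distance is enough for ordering and avoids the sqrt.
float distSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

struct Poly {
    Vec3 center;  // a representative sort point, e.g. the polygon centroid
    int  id;
};

// Painter's-algorithm order: farthest polygons first, closest last,
// so each closer polygon paints over the more distant ones.
void depthSortBackToFront(std::vector<Poly>& polys, const Vec3& camera) {
    std::sort(polys.begin(), polys.end(),
              [&](const Poly& a, const Poly& b) {
                  return distSq(a.center, camera) > distSq(b.center, camera);
              });
}
```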

Enter the depth buffer, sometimes referred to as the z-buffer. The depth buffer is a second draw buffer of the same size as the image buffer on which the scene is being drawn. For every pixel drawn in the image buffer, a corresponding pixel is drawn to the depth buffer; only, the depth buffer pixels are not drawn in color. Instead, depth buffer pixels have a single value, 0 to 1, that represents the distance of the pixel from the camera. (Actually, the range maps to the distance between the near and far clip planes.) Now when another polygon wants to draw to that same pixel, the depth value of the new pixel is calculated and checked against the existing value in the depth buffer. In this way it can be determined whether the new pixel is "in front of" or "behind" the existing pixel. This technique offers several advantages. First, we are no longer forced to depth-sort any non-transparent polygons we wish to draw, because the depth buffer prevents a more distant pixel from covering a closer one. Now we can draw non-transparent polygons in whatever order they happen to be stored in. Second, if we do any reverse sorting, or if we just get lucky and happen to draw a closer polygon before a more distant one, the depth buffer will reject the more distant pixel and avoid the more expensive color and lighting calculations that would have been needed to draw that pixel to the image buffer.
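
That second advantage -- skipping expensive shading for pixels that would be covered anyway -- shows up nicely in a toy single-pixel model (plain C++, not GL): with the depth test active, drawing front-to-back rejects the far fragment before any color work happens.

```cpp
#include <cassert>

// Toy single-pixel model (illustration only): count how many fragments
// survive the depth test and reach the (expensive) color/lighting stage.
struct Pixel {
    float depth = 1.0f;
    int   shadedCount = 0;

    void draw(float fragDepth) {
        if (fragDepth < depth) {  // depth test, like GL_LESS
            depth = fragDepth;
            ++shadedCount;        // only now would color/lighting run
        }
    }
};
```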

--- (oh good grief. I hit a word limit!) ---
#11
12/05/2009 (2:31 am)
--- (yep, there's more) ---

So now that we know how it all works, we can figure out how to sabotage it. However you want to approach the problem, the depth buffer must be dealt with: whether the end of the gun is not drawn because it is behind the wall, or the wall is drawn on top of the gun because it is in front, the result is the same. So depth buffer manipulation is a must. Render order matters as well, though, because once we start messing with the depth buffer, we must fall back on the render order to determine what is ultimately drawn on top.

And so to my solution: In ShapeBase::prepRenderImage() we force Torque to treat the weapon as if it were transparent and we give its position as being right on the camera. This forces Torque to sort the weapon image so that it will be drawn after the other elements in the scene. The tieBreaker parameter is set in prepRenderImage() and used in ShapeBase::renderObject() to preserve the proper order in the case of a weapon image with a mix of transparent and non-transparent parts. This way the weapon image is still rendered in two parts, non-transparent followed by transparent so that they blend properly. (This normally works because one image part is set transparent and one not, but we've overridden that by setting them both transparent.) Then finally in Player::renderMountedImage() (which overrides ShapeBase::renderMountedImage() for Player class objects) we get to trick the depth buffer. We do this by temporarily switching the depth buffer's comparison function to a special setting (GL_ALWAYS) which will always return a "pass" on a depth buffer pixel test. This essentially disables the depth buffer test -- but crucially it still allows writing to the depth buffer. Now when we call shapeInstance->render(), the shape instance will be drawn because the depth buffer cannot reject anything right now. And (and this is important) the depth values for the new pixels representing the weapon will be written to the depth buffer (replacing the values for the wall for those pixels).

But here's the catch. We have succeeded in drawing the entire weapon; BUT, it isn't drawn correctly. Why? Because there is no depth-sorting on the weapon polygons! So without the depth buffer check the weapon itself will not come out looking correct. (If you want to see the effect of this, temporarily disable the second "image.shapeInstance->render();" line.) The important thing though, is that we DID draw the weapon onto the depth buffer. Which means as far as the depth buffer is concerned, there is no longer a wall there. So when we draw the weapon image again (this time with the depth test re-enabled) it will cover the improperly drawn weapon image and not be blocked by the wall.
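
The whole two-pass trick can be condensed into a toy single-pixel model (plain C++, not GL -- GL_ALWAYS and GL_LESS are stand-in enum values): pass 1 wins every depth test but still WRITES depth, evicting the wall; pass 2, with the normal test restored, lets the weapon mesh sort against itself and paint over pass 1's mistakes.

```cpp
#include <cassert>

enum class DepthFunc { Less, Always };  // stand-ins for GL_LESS / GL_ALWAYS

// Toy single-pixel framebuffer + depth buffer (illustration only).
struct Pixel {
    float depth = 1.0f;
    int   color = -1;  // id of the fragment currently visible

    bool draw(int id, float d, DepthFunc func) {
        bool pass = (func == DepthFunc::Always) || (d < depth);
        if (pass) {
            depth = d;  // Always still WRITES depth -- the key to the trick
            color = id;
        }
        return pass;
    }
};
```

Walking it through: the wall is at 0.3; the weapon's front face is at 0.4 and its back face at 0.6, drawn in unsorted order with the back face last.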

So there. Congratulations if you made it through the entirety of my explanation. Maybe you even got something out of it. I rather enjoyed writing it, at any rate. :-o
#12
12/05/2009 (5:34 am)
Thank you very much for your long and clear explanation.

Very instructive and so very useful.

Nicolas Buquet
www.buquet-net.com/cv/
#13
12/05/2009 (5:39 am)
Quote:The more "proper" solution (in terms of accurately simulating real objects) would be to prevent the penetration by pushing the weapon away from other objects.
Amen to that. But hey, everything in video games is about illusion, so one more doesn't hurt ;)

Thanks for the awesome explanation - I certainly now consider myself a lot more savvy about rendering, especially where the z-buffer is concerned.

bryce - in terms of understanding how the engine works a bit better, I highly recommend reading the comments in game/gameBase.h and game/fx/fxRenderObject.h (or maybe .cc). They helped me a ton :)
#14
12/05/2009 (8:33 am)
As far as I know, the skybox is rendered first, without using the z-buffer, because all of the skybox's pixels may be overwritten. After the skybox is rendered, the z-buffer is active and all pixels get a depth value. Then all solid objects are rendered, and after that all objects with transparency.

@Scott Richards: I will try out your solution. If it works, I will have to rethink my player model, because the animated hands and arms should be part of the weapon mesh and not part of the player mesh.
#15
12/05/2009 (12:42 pm)
Thank you very much, Scott! Very good explanation, thanks for spending so much time on this. And your solution works!

I wonder how I can nominate someone to be an associate...
#16
12/05/2009 (2:09 pm)
Quote:But hey, everything in video games is about illusion

Often the most convincing illusions involve a mix of "reality" and "trickery". The key is knowing when to be accurate and when to fake it. Heck, even our brains are in the habit of "faking it". How we perceive the world is often not entirely accurate, though it feels real. But I digress..

Quote:As far as i know the skybox is rendered first. Without to use the z-buffer...

You are correct. If you look at Sky::prepRenderImage() you'll notice it uses a special sortType, "Sky", which Torque places at the very start of the render order. (sceneState.h and sceneState.cc are where the sortTypes are defined and where the actual sorting happens. fyi)

Also, the sky is not the only scene object that has reason not to write to the depth buffer. Certain effects like lens flares and particles are drawn without writing to the depth buffer. The reason being, you don't want those polygons clipping against each other; especially in a ParticleEmitter, where the particles are not depth-sorted (to save time), you don't want to see particles clipping or blocking other particles. These effects, though, will often still respect the depth buffer. That is to say, they will still perform depth buffer testing when drawing their polygons; they just don't modify the depth buffer. If the depth test was not done, you would see particles drawn that are supposed to be behind walls or other objects. Disabling depth buffer writing without disabling depth buffer testing is done by calling glDepthMask(GL_FALSE); GL_TRUE will turn it back on.
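
The glDepthMask behavior, in toy single-pixel form (plain C++, not GL): with writes masked off, a particle still tests against the wall's stored depth, but a passing fragment leaves the buffer untouched, so later particles are never occluded by earlier ones.

```cpp
#include <cassert>

// Toy single-pixel model (illustration only).
struct Pixel {
    float depth = 1.0f;
    int   color = -1;

    // depthWrite == false models glDepthMask(GL_FALSE): the fragment is
    // still TESTED against the stored depth, but a passing fragment does
    // not update the buffer.
    bool draw(int id, float d, bool depthWrite) {
        if (d >= depth) return false;  // depth test, like GL_LESS
        if (depthWrite) depth = d;
        color = id;
        return true;
    }
};
```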

Furthermore, the depth buffer can be enabled/disabled altogether by calling glEnable/glDisable(GL_DEPTH_TEST). Yes, you're correct, they SHOULD have named that flag GL_DEPTH_BUFFER, because GL_DEPTH_TEST implies that we are only disabling the TESTING and not the writing, when in fact we are disabling both. But they didn't design it that way. Go figure. :[

Sadly, there is no simple "intuitive" function to disable testing without disabling writing. Hence the trick used above where we set the test function to GL_ALWAYS, basically telling OpenGL "Yes, do the test but ALWAYS return true". *rolleyes* Design by committee perhaps. :P

Quote:i have to rethink my player model because the animated hands and arms should be part of the weapon-mesh...

That may be one solution. You might also consider extending this trick to prevent the Player itself from ever clipping into world geometry if isFirstPerson. That would be marginally more complicated, since you would have to insert additional SceneRenderImages in the proper order to achieve the desired effect. What you'd be looking to do is: First render both the Player and mounted images with the depth buffer-override trick. Then re-enable the buffer and again render the Player and mounted images. Certainly, it could be done.

Anyhow. You're welcome, bryce. That was a fun little experiment for me. I enjoy digging into the graphics end of the engine; and if you hadn't noticed, I enjoy explaining it even more. :-o

I'm going to stop now, because I've used up my allotment of emoticons.
#17
12/05/2009 (2:52 pm)
@Scott Richards:

Quote:You might also consider extending this trick to prevent the Player itself from ever clipping into world geometry if isFirstPerson.

The whole player? I am not sure, because I always have multiplayer in mind. Maybe all the other (ghosted) players would also be clipped.

I thought about making the change only if the object is the ControlObject, instead of using isFirstPerson.

But like I said, I am not sure.
#18
12/05/2009 (3:09 pm)
isFirstPerson is really the condition you'd want to use. We're talking about a rendering trick that forces an object (Player) to be rendered on top of everything else in the scene. This is only really a good idea if the camera is supposed to be "inside" the Player, because that is the only situation where you could be certain that nothing will be between the Player and the camera. Consider what happens if you tie this effect to isControlObject() and switch to Third Person view. You're still in control of that player, but the camera is some distance away now. If you were to back up against, let's say a crate, the crate should block the Player's legs. But it won't if we render the Player on top of everything. The Player's legs would be drawn over the crate and that wouldn't look right at all. So, isFirstPerson() is the only safe condition for such a trick.

Sadly, if your game is intended to be played in a Third Person view, this trick is just not a viable option. In that case, extending the Player bounding box to encompass the arms would probably be a better approach.

Multiplayer, though, will work just fine. Since isFirstPerson() is only true for each person's own Player, this trick will have no effect at all on the way that your computer draws your opponents.

(Note that the function isFirstPerson() actually includes an isControlObject() check. The difference is that isFirstPerson() imposes the additional condition that the camera be in a First Person position.)
#19
12/05/2009 (3:25 pm)
Actually, depending on the exact position of the Player's "eye", rendering the entire Player using this trick could still be problematic even in First Person view. If you were to walk up to a small object, like perhaps a crate or low wall, and look down you might be in a position where the object should be blocking the view of your toes. If we employed this rendering trick, then your toes would instead be drawn over the object that is supposed to be blocking them.

Whether or not that is a problem would depend on the exact shape of your Player, the position of the eye node, and the size of the Player's bounding box. You would have to evaluate on a case-by-case basis then whether that would be a good idea for your particular game. .. You should be safe though as long as the camera/eye node stays within the bounding box.
#20
12/05/2009 (4:13 pm)
I don't want to use the third person view. Always first person view.