Weapon Prediction
by Eric Smith · in Torque 3D Professional · 03/23/2010 (11:01 am) · 5 replies
I'm trying to figure out how the prediction works in T3D. I have a weapon that creates a projectile when fired. When I run the game in single player, there is no delay when firing the weapon. When I run a dedicated server, connect, and then fire the weapon, there is a slight delay. weapon::onFire() is in the server scripts, so I assume it's the server creating the projectile and thus the delay. However, if I move this function over to the client, the delay is still there.
I'm not exactly sure how prediction works for weapons in torque. Has anybody overcome or have a solution to this problem?
#2
03/25/2010 (12:49 pm)
Thanks for replying with a very concise explanation. I did a mod on the HL engine some years ago called Firearms, so I know how it works in that engine. I haven't done anything with Source, so I don't know how much it's changed. But that's the reason I ask, as I'm redoing Firearms on T3D. Since FA has lots of ballistics-based weapons, it would be very prudent for me to have some sort of prediction, otherwise gameplay fails. I guess I'm going to have to somewhat redo the weapons system to use some sort of prediction, and perhaps even implement hitscan.
#3
03/31/2010 (1:57 am)
Firearms was an excellent HL mod... I hate to be the one to bring you the bad news about how much work will be needed to bring it over to Torque as an effective multiplayer game. Obviously, how much you'll care about this issue depends on the level of accuracy you need -- i.e., if you're OK with favoring automatic weapons over sniper rifles and such, you could get away with just the client-side visual-effects prediction, but serious multiplayer FPS veterans will likely still complain eventually.

From what I'm aware of, the prediction system in Source is basically the same as in the original HL engine, though I'm sure it's gone through some refinements over time. That system, I believe, still essentially traces back to the unofficial QuakeWorld binary at some level.
This is something I'd love to do some work on at some point, unfortunately that just adds it to a very long list of work I can probably get away with skipping for my current project.
The main issue will be doing animation back-interpolation. For really accurate hitbox detection on hitscan weapons, the server needs to actually rewind the animation of the player models near the projectile's path (we can skip anything way outside the possible hit zone). Just moving the player object back to where it was, say, 100ms ago is insufficient if the guy was also in the process of standing up from a crouch. If the animation isn't interpolated correctly, a headshot turns into a gut-shot, or the other way around.
I'm pretty certain animations do get played on the server, as there are a number of door resources based on the concept of animated collision boxes. So the main concern would be recording a certain number of keyframes, maybe the past 4 ticks (32 ms/tick, so that's 128 ms of possible lag compensation -- this could be adjusted as needed; maybe laggier players would store more keyframes). Interpolating back through these keyframes to the exact moment of fire wouldn't be 100% accurate, but it would be really close (Source/HL aren't 100% accurate either, hence all the "OMG BAD HITBOX HEADSHOTTED" complaints you may see out there). Combined with storing rotations and positions from the past few ticks, you could fairly effectively interpolate to the exact state a target was in during a fire event.
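The keyframe idea can be sketched minimally. This is illustrative only -- HitboxKeyframe and interpolateKeyframes are made-up names, not Torque API, and a real version would store a full transform per hitbox rather than a single position:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical snapshot of one hitbox at one server tick (32 ms apart;
// 4 of these = 128 ms of compensation, per the numbers above).
struct HitboxKeyframe {
    uint32_t timeMs;   // server time the snapshot was taken
    float    pos[3];   // world-space hitbox origin
};

// Blend the two keyframes bracketing the lag-compensated fire time t to
// estimate where the hitbox was at that exact moment.
void interpolateKeyframes(const HitboxKeyframe& a, const HitboxKeyframe& b,
                          uint32_t t, float out[3])
{
    float s = float(t - a.timeMs) / float(b.timeMs - a.timeMs);
    for (int i = 0; i < 3; ++i)
        out[i] = a.pos[i] + s * (b.pos[i] - a.pos[i]);
}
```

With keyframes stored at t = 100 and t = 132, a fire event rewound to t = 116 lands exactly halfway between the two stored positions.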
I.e., a player with 50 ms lag (total two-way trip delay) fires at time 200. The firer's client predicts the shot based on the current on-screen state, detects a hit, and renders a blood splash (still at 200). The server does a raycast at time 225; however, we rewind all players in the possible hit zone (or just everyone, depending on how expensive this process ends up being) to their states at time 200. The server detects a hit on another player and sends a packet back to the client. The packet arrives at time 250 confirming the hit; no additional action is taken by the client. Had the packet returned no hit (i.e., a hit vs. a non-player surface), we could have canceled the blood effect at this point, and it would have only been incorrectly visible for 50 ms. This could happen due to small inconsistencies between the on-screen state and the interpolated server state. Whew.
Beyond this, some testing (or specific details from someone who actually knows) would be needed to determine how the current on-screen position a client sees another player at relates to the current server position. My *assumption* is that this is predicted so that it's identical, but I'm not certain.
#4
03/31/2010 (2:24 am)
I should probably have edited the previous post and replaced it with this updated info, but I'd rather leave the thought-process trail, if only for my own reference (and just adding it exceeded the post limit =P).

Took a look at the animation code and realized I was wasting time (though I did notice a few places where you'd need to force Players to server-animate more types of actions for accurate hitboxing).
The easiest thing to do is skip the animation threads themselves and just grab the full transform matrix of each hitbox object in the player model every tick and store that. Store the player object's transform matrix alongside it, and you now have all the data you need to do the backpedaling.
The only confusing bit now is how to do a raycast against a different player state without actually changing the positions of parts of the server player object itself. Maybe it would be easier to simply create a set of dummy objects within the hitscan weapon's raycast at the appropriate positions, then translate any hit against this phony geometry back to the player object itself. I.e., the raycast hit the interpolated dummy head hitbox of player obj 3050; treat this like a hit against the actual obj 3050 hitbox 0. Ignore any hits to "real" player objects, which would constitute lag-induced "bad hits."
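A minimal sketch of that bookkeeping, with made-up types (DummyHitbox, HitResult -- nothing here is engine API): each phony box remembers the real player object and hitbox index it stands in for, so a hit against it can be reported as a hit on the real object.

```cpp
#include <cassert>
#include <cstdint>

// Each dummy box created for the rewound raycast carries the identity of
// the real server object it represents.
struct DummyHitbox {
    uint32_t ownerId;   // e.g. 3050, the real Player object's id
    int      boxIndex;  // e.g. 0 for the head box
};

struct HitResult {
    uint32_t objectId;  // reported as the REAL player object
    int      boxIndex;
};

// Translate a hit on the phony geometry back to the real player, so the
// damage code downstream never knows the rewind happened.
HitResult translateHit(const DummyHitbox& d)
{
    return HitResult{ d.ownerId, d.boxIndex };
}
```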
That probably isn't expressed very clearly, but it actually sounds quite doable at this point to me. I'll explore this more tomorrow and see if I can modify the raycasting to work against a set of "fake" geometry objects. Anything I manage to do here will go out to the community, so I encourage anyone who knows more about working with/raycasting vs arbitrary geometry sets in Torque to toss out some clues.
For the record, I'm using a specific hitbox resource that's on the site here. It's pretty decent, I can look up a link if anyone needs it. It basically just defines objects with certain naming conventions in the model as hitbox geometry, which raycasts vs players will then hit, and modifies the Projectile's onCollision method to return a numerical value representing which box was hit.
edit w/ some code theory
This is getting too specific and is actually code from the hitbox resource, but I believe this is where the backwards interpolation offset could be easily applied when doing the raycasting.
Calling a raycast goes to the container raycast. In effect, it cycles through every object in the scene which has collision geometry and invokes that object's castRay function.
To reach this point, the earlier check for raycast against bounding box (top of Player::castRay) would have already taken into account the player object's transform offset. At this point, we would need to apply the (world-space) transform offset of the hitbox in question.
// set up for first object's node
MeshObjectInstance* mesh = &mMeshObjects[HBIndex];
mat = mesh->getTransform();   // current hitbox transform -- this is where the interpolated matrix would be substituted
mat.inverse();
mat.mulP(a, &ta);             // transform world-space ray endpoints a and b
mat.mulP(b, &tb);             // into the submesh's local space
// collide...
if (mesh->castRayEA(od, ta, tb, rayInfo, mMaterialList))

This code (from the hitbox resource's added function TSShapeInstance::castRayEA) is where the world-space raycast vector gets translated into the local space of the hitbox submesh. Instead of using the current transform of this hitbox, we'll simply substitute our interpolated transform matrix! It's actually pretty lucky the raycast work ends up being done in local submesh space, because it means we can move the ray instead of the object and avoid any issues with changing or duplicating the server state.
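To make the "move the ray, not the object" point concrete, here's a toy version using a translation-only transform (a real implementation would substitute a full stored matrix for mesh->getTransform(); these names are purely illustrative):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Bring a world-space ray endpoint into the hitbox's local space as it
// was at the rewound time. For a pure translation, the inverse transform
// is just a subtraction -- the stored historical offset plays the role of
// the interpolated matrix described above.
Vec3 worldToLocal(const Vec3& p, const Vec3& historicalOffset)
{
    return Vec3{ p.x - historicalOffset.x,
                 p.y - historicalOffset.y,
                 p.z - historicalOffset.z };
}
```

The ray is moved into the hitbox's old frame of reference, so the live server objects never need to be touched.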
It's actually impressively simple, which means I'm likely missing something critical. We'll find out tomorrow when I take a more rational, less sleep-deprived look at the issue and try to actually implement something. One bit I've probably missed is that we also want to fire the initial ray from the appropriately backpedaled state of the attacker, in case he was run-n-gunning. Compared to the hitbox animation stuff, that's easy.
#5
04/12/2010 (3:33 pm)
Thanks Henry for your reply. You've given me a lot to think about. I'll have to poke around and see what I can do with this.
Torque Owner Henry Todd (Atomic Walrus)
Basically, the ShapeImage (the weapon) is a state machine that cycles through different states, eventually getting to the fire state and triggering the onFire method in script. For the sake of server authority this must be a server-side event. As far as I know, there's no prediction being done here by the client -- the projectile is created on the server, then ghosted to the client in the next (hopefully) packet. This is probably fine for the types of weapons the system was originally designed for: Tribes' relatively low-velocity projectiles.
Games that rely on bullet dynamics skip the concept of a projectile "object" entirely and go straight to hitscan weapons. In other words, they simply cast a ray during the fire event and determine the hit from there. I like to use the Source engine as an example, because it's been refined to handle latency-prone multiplayer games in Counter-Strike, a game which generally requires a high degree of accuracy.
Source will actually use current latency to do a bit of -- I believe they call it lag compensation -- reverse prediction. Basically, the server says "this fire event came in at time T, but that client has 40ms of latency, so let's rewind the sim state to T-40 and do the hitscan from there." This avoids any lag-related target "leading" that older games required. Since hitboxes are involved, the server also has to reverse-animate the player models.
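That rewind bookkeeping can be sketched like so. The function name is invented, and the 32 ms tick and 4-tick history figures match the ones used elsewhere in this thread:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative only -- rewindTime is not a Torque function.
const uint32_t TICK_MS       = 32;  // Torque's tick length
const uint32_t HISTORY_TICKS = 4;   // 4 ticks of stored state = 128 ms

// Given the server time a fire event arrived and the shooter's one-way
// latency, pick the past time to rewind targets to, clamped so we never
// request a state older than the stored history.
uint32_t rewindTime(uint32_t arrivalMs, uint32_t oneWayLatencyMs)
{
    uint32_t maxRewind = TICK_MS * HISTORY_TICKS;  // 128 ms
    uint32_t rewind = (oneWayLatencyMs < maxRewind) ? oneWayLatencyMs : maxRewind;
    return arrivalMs - rewind;
}
```

For the 40 ms example above, an event arriving at T is traced at T - 40; a 500 ms laggard gets clamped to the 128 ms window instead of rewinding the whole match.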
Additionally, when the player initiates the fire event, the client does all of the effects for a fire event before waiting for server confirmation. This avoids any visible lag between pressing the button and hearing/seeing the shot. The server may sometimes override the fire event for some reason (maybe the client ammo count was wrong, and the player was actually out of ammo), but generally this won't happen. If it does, the next incoming packet will correct the client.
On top of that, every client-side object position is predicted and interpolated based on the last incoming packet (server position, direction of movement) and, again, the amount of latency. This keeps players on your screen where they, more or less, really are.
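The position side of that can be sketched as simple dead reckoning from the last packet (illustrative only; Torque's actual prediction/interpolation code is more involved than this):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Extrapolate a remote player's position from the last received state:
// server position plus velocity times the time elapsed since that packet
// (which includes the latency).
Vec3 predictPosition(const Vec3& lastPos, const Vec3& velocity,
                     float secondsSincePacket)
{
    return Vec3{ lastPos.x + velocity.x * secondsSincePacket,
                 lastPos.y + velocity.y * secondsSincePacket,
                 lastPos.z + velocity.z * secondsSincePacket };
}
```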
Back to Torque... I'm not 100% sure what the prediction and interpolation system actually does, but here's my understanding: it appears to simply predict client-side ghosts ahead to last server tick + latency. That leaves you aiming at where things currently are (or at least a best guess), just like in Source, but without any lag compensation in the weapons themselves, you'll find yourself missing a moving target when you aim directly at it, regardless of projectile speed.
For this example, T = the actual time at which the client picks up a trigger button press, L = this client's latency
To solve half of the issue, the client would create its projectile ghost the instant you press the trigger button (at T), then the server would create the projectile as soon as it gets the event (at T + L), but sim it ahead L ms in the first tick to make up for the "lost" lag time.
This is still, however, not a perfect solution. While it does result in a zero-lag projectile (it covers the distance it lost during the lag in the first tick of its sim), the target could still have moved between the actual client fire time and this first server tick, resulting in an "incorrect" miss. Optimally, the first tick of the server projectile object would occur in a "rewound" state, where all the objects have been moved back to their positions at time T. Then we sim forward L ms, all in this one tick, until we're finally back in sync. (Technically, I think Source only applies this backpedaling to Player objects, to keep things simple.)
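The "sim ahead" half could look like this (a 1-D toy with invented names): on its first server tick the projectile advances by the lost L ms plus the normal tick step, so it ends that tick where a zero-latency projectile would be.

```cpp
#include <cassert>

// Illustrative 1-D projectile; a real one would integrate a 3-D velocity
// and gravity through the engine's normal tick.
struct Projectile {
    float pos;       // distance along flight path, in meters
    float speedMps;  // meters per second
};

// First server tick: cover the distance "lost" in transit (latencyMs)
// plus one ordinary tick's worth of travel.
void firstServerTick(Projectile& p, float latencyMs, float tickMs)
{
    p.pos += p.speedMps * (latencyMs + tickMs) / 1000.0f;
}
```

At 100 m/s with 50 ms of latency and a 32 ms tick, the projectile jumps 8.2 m on its first tick instead of the usual 3.2 m, catching up to where the client already thinks it is.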
The first half would be pretty easy to implement. The second... probably not quite so simple. As far as I know Torque has no simple way to just call up a state from the past on the server (if it did we could've fixed the Rigid class collision sim years ago). Anyway, that first bit would certainly reduce the appearance of lag, but you're still forcing players to compensate for it in how they aim.
For the record, I'm probably completely missing some key aspects of Torque's networking. One of the devs could certainly provide a more reasonable explanation of Torque's prediction systems, and it may turn out they're more advanced than what I'm aware of. I just thought I'd share what I "know" on the subject.