Network lag compensation
by Duncan Gray · in Torque Game Engine · 04/02/2007 (12:24 am) · 35 replies
Firstly, if you are a naysayer, go away and spoil someone else's day :)
This thread is an offshoot from this blog
FPS games lose their fun to the degree that network lag enters in. I thought of a few ways to reduce the problem and thought I'd open it up for discussion.
The problem is that a player with a 200ms lag will usually suffer greatly when trying to shoot a moving target because:
(1) By the time his target appears on his screen, 200ms has already elapsed and the target on the server has continued moving.
(2) The player shoots at where the target appears on his screen (already wrong) and the info takes another 200ms to arrive at the server. By the time the server calculates his firing solution, his target is 400ms away from where he thinks it is.
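To get a feel for the scale involved, here is a quick sketch of the arithmetic; the strafe speed and hitbox comparison are illustrative assumptions, not figures from the post:

```cpp
// Rough arithmetic behind the complaint above: how far the target has
// moved by the time the server judges the shot.
float missMeters(float targetSpeedMps, float totalLagSec) {
    return targetSpeedMps * totalLagSec;
}
// e.g. a target strafing at an assumed 5 m/s with the 400ms error
// above has moved about 2m from the aim point, which is several times
// the width of a typical player hitbox.
```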
I gave this some thought and came up with three ways to reduce the problem.
(a) Don't use projectiles where possible (no curved ballistic trajectory); instead use line-of-sight or laser-type weapons. This allows the firing solution to be calculated on the client using a client-side ray cast, with a "hit" sent to the server for distribution to the other clients.
At least the client will get to hit what he sees on his screen, irrespective of lag. But it does open the door for cheating clients to send fake hits (I hate cheaters).
(b) To avoid cheating clients, let the server store a transform history (say up to 500ms in 32ms intervals) of mobile targets such as players and vehicles.
When the client fires his weapon, the server checks the client's ping time, doubles it, and then consults its history to calculate the client's firing solution. That way the client still gets to hit his target, but now he can't cheat. You will have to limit connections to those clients whose ping times fall within your history allocation.
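A minimal sketch of what (b)'s history might look like server-side. Everything here (the class name, tick rate, the `Vec3` type) is an assumption for illustration, not Torque API; note that a later reply (#32) argues the rewind should be half the ping rather than double, so the sketch leaves that choice to the caller:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>

// Illustrative sketch, not Torque API: the server samples each mobile
// object's position once per 32ms tick into a ring buffer covering
// ~500ms, then rewinds by the shooter's latency to resolve a shot.
struct Vec3 { float x, y, z; };

class HistoryBuffer {
public:
    static constexpr std::size_t kSlots = 16;  // 16 * 32ms = 512ms window
    static constexpr int kTickMs = 32;

    // Called once per server tick with the object's current position.
    void record(const Vec3& pos) {
        head = (head + 1) % kSlots;
        samples[head] = pos;
    }

    // Rewind by rewindMs (the post says 2x ping; reply #32 argues
    // ping/2 -- the caller decides), clamped to the history window.
    Vec3 rewind(int rewindMs) const {
        std::size_t back =
            std::min<std::size_t>(rewindMs / kTickMs, kSlots - 1);
        return samples[(head + kSlots - back) % kSlots];
    }

private:
    std::array<Vec3, kSlots> samples{};
    std::size_t head = 0;  // index of the newest sample
};
```

Clients whose ping exceeds the buffer window would be rejected at connect time, as the post suggests.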
(c) Introduce an overall lag compensation of about 150ms for all players. This means a delay of 150 - ping gets added for all clients whose ping is below 150ms. This has the added benefit of near-perfect client-server sync for all players below a 150ms ping, making vehicle collisions and racing games a real pleasure. Of course, the 150ms can make the game feel just a little unresponsive, but the advantages may outweigh the annoyance. The value of 150ms can obviously be fine-tuned to suit the game, but at least a player with a 200ms lag will now only lag by 50ms relative to everyone else.
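Option (c) reduces to a line or two per client; a hypothetical sketch (the function names are mine, not from the post or Torque):

```cpp
#include <algorithm>

// Sketch of option (c): pad each client's latency up to a common
// floor so everyone below it experiences the same effective lag.
// targetMs is the tunable equalization point the post discusses.
int addedDelayMs(int pingMs, int targetMs) {
    return std::max(0, targetMs - pingMs);  // "150 - ping", floored at 0
}

// How far a client still lags behind the equalized time frame.
int residualLagMs(int pingMs, int targetMs) {
    return std::max(0, pingMs - targetMs);
}
```

With targetMs = 150, a 60ms client gets 90ms of added delay, and a 200ms client lags the equalized time frame by only 50ms, matching the post's example.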
Perhaps a combination of the last two above methods will be best.
What do you think? Discuss below.
[edit]Found some good comments from Tim Gift in this thread
#22
04/02/2007 (8:57 pm)
Heheh... in reading your latest blog post, perhaps an Aussie-specific "high ping" version as well as a "continental" version might be more appropriate. :)
Although I do feel compelled to add that our development team has members in Sweden, England, Norway, Tennessee, Australia and New Zealand, and none have complained about latency or "lag shooting" with TNL and a backbone-based server in California... and our project does employ some ray cast weapons.
Maybe you just need a new ISP. :)
#23
04/02/2007 (9:30 pm)
@Bryce: If I understand your "master" server placement statement correctly, it means that instead of one server hosting one game, it would be two servers hosting one game in two different locations?
If so, is it possible (in theory) using TNL to place "master" servers, say, 20ms apart across the US, and have someone in NY who is 30ms from his closest server end up with a 50ms ping to the game, while someone in CA does the same and is also only 50ms away from the same game? Whereas normally one of them would be at around 80ms to over 100ms.
#24
04/02/2007 (11:51 pm)
@NUTS!: Love the nickname, BTW. One of my favorite legends of WWII.
Let me lay the whole scheme out... Think of it like the "many hands make for light work" server model. :)
Keep in mind that the situation in which this even becomes an attractive "final" solution is when the packet requirements of your game are such that there is a significant flow of "fluff" data. Several servers along backbones is the n-th logical conclusion of a scalable system. For most indie games, all you'd need would be to split the chat system off to an IRC server (resourced around here somewhere) and a little creative scripting to get the gameserver-masterserver-client communication rolling. The one thing you can absolutely, positively guarantee is that the game server itself will send out no data that isn't absolutely essential to sim synchronization.
Assume your game is produced to allow players to host their own servers, in dedicated or non-dedicated mode. You, the game producer, host a "master" server akin to the one hosted by GarageGames, except this master server not only maintains a heartbeat from each game server but also maintains a direct connection to each client. The game servers report in-game non-sim data to the master server, which then disseminates that information to the connected clients.
Master servers can then be placed geographically to service the player base within a specified radius, determined by IP. All IPs on this side of the country go here... other IPs go there... international IPs go here. You can even allow players to select the master server they want to use, or just use the default "local" server.
It's more complex than the standard client/server model for FPS games... but if you're pushing so much data through your game server that you're starting to explore new networking options, it's probably best to make sure you're totally optimizing what you have before you go mucking about with alternate methodologies... which was my point in the first place. :)
#25
04/03/2007 (4:07 am)
Quote:
Us people on the rest of the planet can find some local servers (local to us) but once you get bored with the map selection you look elsewhere for some game variation and then you run into the latency issues.
I do not know what type of connection you are using, but we have testers all around Europe and the US, and no one has that much latency with a co-located setup in Stockholm. We had to simulate it for our tests, since no one had anywhere close to 200ms.
#26

04/03/2007 (5:07 am)

That's a screenshot from a Delta Force Xtreme demo, with ping values from Australia to various hosted games in the US and Europe. Ping values which show only dots mean the ping is over 500ms. On weekends there are a lot more games, and I can usually get some 200-250ms games with guys on the west coast of the USA. The east coast of the USA is typically 400ms upwards. Europe is usually worse. If I host a game here, guys from the US and Europe don't like joining because of the high ping from their side.
I tested my last TGE game with a guy in South Africa and got 800ms ping and the game was unplayable.
I'm using a 512k broadband connection but as explained earlier, that has no impact on ping. I'm also using a 3GHz AMD 64 cpu with two Nvidia 6600 in SLI mode but as explained earlier, that has no impact on ping...
Quote:
co-located setup in Stockholm
Yeah, that should help, because your co-located server is wired directly to the internet backbone, but I'm talking about games hosted by other players, which, as you can see above, is a different picture from what you and Bryce describe.
You have to cater for your game being hosted by various players at home, because that's where the majority of your hosting will come from, unless you can afford a lot of dedicated servers connected to the internet backbone.
#27
04/03/2007 (8:59 pm)
@Bryce: Yup, NUTS! is mine too.
Thanks for the info. I recently started reading/studying Python to help me understand Twisted a bit more. I love what Prairie Games does with its servers; however, that's not really an FPS game. I would love to see how that server setup could withstand "stuff" needing to be moved/shot around all the time. It sounds like you may have designed something similar using TNL?
#28
04/04/2007 (11:52 am)
@Duncan: Just to clarify, you wouldn't be hosting the game servers; players would do that... you'd only be hosting 3-4 "master" servers, but it is a cost nonetheless. Overcoming latency without sacrifice is impossible. My solution sacrifices money (production side), whereas your solution sacrifices performance (player side). It's more of a "which is better: chocolate or vanilla" discussion now. :)
@NUTS!: If your project isn't already utilizing Python, then TorqueScript will work just as nicely. I think I've been able to boil down what I'm getting at. I've never fully explained the method before, so it's still a little rough. Think of it like this...
All commandToServer and commandToClient commands where speed is not of the essence (playing emotes, chat updates, game status updates, weather conditions) are sent via the master server... if the player initiates a commandToServer, it is sent to the master server, which in turn sends it to the map server. If the map server initiates a commandToClient, it operates in reverse. For FPS gaming, things like switching weapons and using health kits aren't something you'd want to place within this scheme, because of the added communication time. The end result is that connections to the game server now only contain critical game information (projectile transforms, player transforms, etc.), while all other information is sidetracked through the master server.
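The split described above could be sketched as a simple routing decision; the command names and types below are illustrative assumptions, not the actual TNL/Torque API:

```cpp
#include <set>
#include <string>

// Sketch of the routing split: time-critical commands go straight to
// the game server, everything else detours through the master server.
enum class Route { GameServer, MasterServer };

Route routeFor(const std::string& cmd) {
    // Commands where added latency would hurt gameplay (assumed names).
    static const std::set<std::string> timeCritical = {
        "fire", "move", "switchWeapon", "useHealthKit"
    };
    return timeCritical.count(cmd) ? Route::GameServer
                                   : Route::MasterServer;
}
```

So a "chat" or "emote" command would detour through the master server, while "fire" keeps the short path.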
I hadn't considered splitting object types to their own server... but that's a really radical, yet attractive, idea for utilizing a broadband connection. You could have a player server, a projectile server, a vehicle server, etc, with all of the information being re-collated on the client from the various sources. It'd take a rewrite of the front end of GameConnection and some pretty extensive modifications throughout the engine... Damn, I wish I had more time to play with this idea because it's really getting my wheels turning. :)
#29
04/04/2007 (2:27 pm)
@Bryce, if your game is doing well enough to fund that solution, then sure, but even successful companies like Novalogic could only afford to supply a couple of host servers themselves.
But if I understand you correctly, your solution actually does not address network latency. It just distributes packet handling away from a probably cheapish home PC to high-performance master server(s), so that the home PC hosting the game only has to update all clients with sim-specific data. This will improve the responsiveness of the hosted game by a small margin, but packet travel time for the sim is unaffected by this solution.
You still have not addressed the fact that if a game is hosted in New York, players in New York have far fewer routers between them and the game server than players from anywhere else. Router handling is what causes network latency.
Option (c) is a small sacrifice in apparent responsiveness, but it evens the field for all players who fall within the adjustment range.
Option (b)'s only advantage is to improve weapon accuracy for those players who fall outside option (c)'s adjustment range.
#30
04/04/2007 (9:50 pm)
Taking a step back, the largest game production houses in the industry have not yet been able to spend enough money to overcome inherent internet latency. Blizzard, SOE and ArenaNet all use localized servers because none are willing to sacrifice performance for latency.
#31
04/05/2007 (1:21 pm)
Quote:
Taking a step back, the largest game production houses in the industry have not yet been able to spend enough money to overcome inherent internet latency. Blizzard, SOE and ArenaNet all use localized servers because none are willing to sacrifice performance for latency.
True. But they also aren't ignoring the effects of latency. WoW and SOE's MMOGs may not have any code to deal with latency issues simply because latency isn't nearly as much of an issue in a game that ticks only a dozen times a second or less.
But, anyone writing a new FPS to be played simultaneously by a dozen or more people is probably investing at least some time considering latency issues, which is the point of Duncan's post, I think.
#32
01/15/2008 (11:20 pm)
Sorry for the very late post, but since I was also looking at doing this with Torque and have experience with lag compensation, I figured I would add my two cents. I have implemented method (b) before in other engines.
Your math is a little off. The message containing the shot is sent to the server, which repeats the shot. A ping is a round trip (server to client and back to server), so it is only half their ping, not double, that needs to be compensated. Other than that, your description is spot on. Half-Life, Quake Unlagged and a few others use this exact technique.
Personally I like a slightly modified method: interpolate server time along with the entities' positions as they are rendered, and return that value to the server with the move. You are then dead on and don't have any inaccuracy due to variable latency. I was going to look at implementing this in Torque with a simple hitscan trace.
The problem with projectiles is that this becomes expensive: essentially, for every frame the projectile moves, you need to compensate. You could group projectiles to a player to perform this with less expense. IMHO it would be better to extrapolate the projectile's position based on the shooter's latency (ping/2), as that skips the expense completely and gives essentially the same effect.
As to cheating, if you really think a proxy is a problem then encryption is your only real solution, though it can be broken by someone willing to spend the time. (Encryption doesn't protect you from the sender or recipient, only from people in the middle.)
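The "return the interpolated server time with the move" variant might look like this on the server side; this is a sketch with assumed names and 1-D positions, not the actual Torque implementation:

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch: the client reports the interpolated server
// timestamp it was rendering when it fired, and the server rewinds
// the target to exactly that time by interpolating between history
// samples. 1-D positions for brevity.
struct Sample { float timeMs; float x; };

// history must be ordered oldest to newest and non-empty.
float positionAt(const std::vector<Sample>& history, float fireTimeMs) {
    // Clamp to the recorded window.
    if (fireTimeMs <= history.front().timeMs) return history.front().x;
    if (fireTimeMs >= history.back().timeMs)  return history.back().x;
    for (std::size_t i = 1; i < history.size(); ++i) {
        if (history[i].timeMs >= fireTimeMs) {
            const Sample& a = history[i - 1];
            const Sample& b = history[i];
            float f = (fireTimeMs - a.timeMs) / (b.timeMs - a.timeMs);
            return a.x + f * (b.x - a.x);  // linear interpolation
        }
    }
    return history.back().x;  // not reached for ordered input
}
```

Because the client reports the exact render time, no ping estimate is needed, which is why this avoids inaccuracy from variable latency.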
#33
01/16/2008 (3:08 pm)
Nice to hear from you Danni, and your maths is probably correct, since I had not given it that fine a degree of attention.
I have not bothered implementing this in TGE because I have been playing Crysis, and by comparison TGE really sucks; I'm somewhat demotivated by trying to compete on that level with this tool. I think TGE is better suited to games with Wii-type graphics, i.e. a cute, cartoonish look. If you could make an FPS like that and make it popular enough to attract enough multiplayer traffic to justify your efforts financially, then there might be a place for network lag compensation in TGE.
But that's just my opinion in the light of Crysis withdrawal symptoms :)
#34
01/16/2008 (4:35 pm)
I played Crysis too, and while I was amazed at the detail, there were some aspects of the gameplay that turned me off. Battlefield, COD and many others are just the same. I still play Doom II online and Day of Defeat (Steam/Valve) because there is nothing like them out there. Most developers seem to have this weird perversion for realism and spend more time on that than on creating enjoyable multiplayer gameplay. At least, that is my opinion.
#35
01/30/2008 (10:25 am)
Regarding projectiles, all you need to do to lag-compensate them is create them at firepos + firevec * speed * pingtime.
Obviously you'll want to raycast to make sure you're not creating them inside walls, and clamp the offset so they don't start too far away.
And as you said, the best way to lag-compensate hitscan weapons is to store a history of eligible objects' positions on the server and look it up via ping; that's what the Source engine uses.
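That spawn formula, with the clamp, might be sketched like this; the names are assumptions, not Torque API, and whether the time should be the full ping or half of it is debated earlier in the thread (#32):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Illustrative sketch of the spawn-advance idea: move the projectile
// forward by the distance it would have covered during the shooter's
// latency, clamped so it never starts too far ahead.
Vec3 advancedSpawn(const Vec3& firePos, const Vec3& fireVec,
                   float speed, float pingSec, float maxAdvance) {
    float dist = std::min(speed * pingSec, maxAdvance);
    // A real implementation would also raycast from firePos along
    // fireVec and shorten dist to the first wall hit.
    return { firePos.x + fireVec.x * dist,
             firePos.y + fireVec.y * dist,
             firePos.z + fireVec.z * dist };
}
```

For example, a 100 m/s projectile fired by a client with 200ms of latency spawns 20m down the fire vector; a much faster projectile would be held back by the maxAdvance clamp.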
Torque Owner Duncan Gray
Perhaps my 150ms is too high. I have not tested it, but I did not cast it in concrete either; perhaps 100ms is better. The point is that it's the willingness to equalize the time frame upon which game conditions are evaluated that needs to be addressed.
Most game engines, including TGE, take the easy route of "server time is king!", but that's merely a compromise which can so easily be reduced by option (c).