Changing TickMS recommended?
by Sean H. · in Torque 3D Professional · 04/24/2010 (11:37 am) · 5 replies
Bit of background
As far as I can remember, Torque has always used a tick rate of 32 milliseconds. I'm not sure why the engine uses this value, or how deeply it is integrated into the core engine code (I see a #define for it), but after doing a bit of research on the forums, I see it's never really been explained why the value is 32 ms or how changing it would impact performance and stability.
TickMS is the duration of simulation time that elapses between process ticks and server updates, and it's the amount of time the engine uses to calculate new (extrapolated) states for dynamic objects. When a new state is calculated, the engine interpolates between the old state and the new state until 32 ms pass and another state is calculated.
If, after 32 ms, the object's interpolated state is reasonably close to the last extrapolated state, a new state is calculated and used for interpolation. If the object's current and last extrapolated states are too far apart, the object goes into "warp" mode, where the next tick is used to slightly nudge the object toward its destination state before a new state is extrapolated.
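The tick/interpolate loop described above can be sketched roughly like this. This is a minimal hypothetical example, not actual Torque 3D source; `kTickMs`, `Sim`, and the constant-velocity "simulation" are my own illustrative names, and warp mode is omitted:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical fixed-timestep sketch of the scheme described above.
// None of these names come from the Torque 3D source.
constexpr double kTickMs = 32.0;

struct Sim {
    double accumulatorMs = 0.0; // elapsed time not yet consumed by ticks
    double prevPos = 0.0;       // state at the previous tick
    double currPos = 0.0;       // state at the most recent tick
    double velPerMs = 1.0;      // constant velocity, for illustration only

    // Called once per rendered frame with the real elapsed time.
    // Returns the position to draw this frame.
    double advance(double frameMs) {
        accumulatorMs += frameMs;
        // Run as many whole 32 ms ticks as the elapsed time covers.
        while (accumulatorMs >= kTickMs) {
            prevPos = currPos;
            currPos += velPerMs * kTickMs; // one simulation tick
            accumulatorMs -= kTickMs;
        }
        // Blend between the last two tick states for rendering.
        double alpha = accumulatorMs / kTickMs; // in [0, 1)
        return prevPos + (currPos - prevPos) * alpha;
    }
};
```

With a 48 ms frame, one tick runs and the leftover 16 ms gives alpha = 0.5, so the rendered position sits halfway between the two tick states.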
The questions
The first question is simple. Given the above (possibly flawed) understanding of how things work, what happens when the frame time is substantially greater than 32 ms? Some systems are hard-coded to extrapolate 32 ms into the future and will therefore never be correct if the frame time is, say, 60 ms (roughly 17 fps). AFAIK, there's no automatic adjustment for low framerates. The system seems to somewhat break down when the framerate drops below about 30 fps, warping on every frame (a possible cause of jitter?). The engine appears to have been designed with the assumption that the frame time will always be substantially less than 32 ms, allowing for at least one interpolated frame.
I'm considering TickMS as a potential area for optimization. So the question is: what would be the impact of changing this value? Are developers encouraged to tweak it for their individual target requirements? If TickMS were set to, say, 60 ms, then even at 30 fps you would get two frames rendered per server tick, allowing for at least one interpolated frame with no warping. Increasing TickMS would also significantly decrease the frequency of physics updates, further improving performance... theoretically.
As an aside, I believe Unreal Engine (UDK) uses a tick period of 100 ms.
#2
04/26/2010 (10:09 am)
Frames are interpolated between the previous tick state and the new tick state, not extrapolated. The adverse effects of increasing TickMS are:
1) Increased input lag (due to interpolation).
2) Decreased perceived physics accuracy (because simulation only happens at each tick, and in-between frames are interpolated).
You'll only see an increase in performance if your game is CPU-bottlenecked (i.e., physics, AI, too much going on in script). Increasing TickMS is very useful if you're running an MMO or something server-side, since you can pretty much halve your CPU usage by increasing TickMS to 64. It also reduces the amount of bandwidth used by your game, of course.
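The "halve your CPU usage" figure follows directly from the tick frequency. A trivial illustration (my own helper, not engine code):

```cpp
#include <cassert>

// Ticks per second for a given tick period in milliseconds.
// Illustrative only -- not a Torque 3D function.
double ticksPerSecond(double tickMs) {
    return 1000.0 / tickMs;
}
```

At TickMS = 32 the server runs 31.25 ticks per second; at 64 it runs 15.625, so per-tick work (physics, scripts, network updates) happens half as often.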
#3
04/26/2010 (5:58 pm)
Quote:
Frames are interpolated between the previous tick state, and the new tick state, not extrapolated.
Yes, this is true. Now, consider the case where the engine takes 66 ms to render a frame (15 fps). In that case, on every frame the engine will see that a tick has passed and attempt to extrapolate a new state; thus, at a framerate of 30 fps or less, no interpolation takes place. Right?
Furthermore, since the physics calculations assume 32 ms between updates, the simulation will never be correct at framerates below 30 fps.
Logic seems to indicate that for the engine to operate correctly, TickMS would need to be at least twice the average frame time.
#4
04/27/2010 (7:54 pm)
If you're expecting your players to be operating at sub-30 fps (???), or you're running a game type that doesn't require high-frequency updates (say, a WoW-style MMO), I suppose there could be some argument for lowering the tick rate. However, you will still be dealing with substantially reduced physics resolution: among other things, projectiles will ghost later, vehicles will be more likely to pass through objects, and players may actually be more likely to jitter when colliding.
Anyway... I don't personally see any reason that a low framerate would break this system. Even at high framerates, I don't believe the engine forces any frame to occur during a sim tick, so frames should essentially always fall between two ticks, except in the very lucky random case where they fire at exactly the same time. At a totally arbitrary framerate, say 47 fps, the chance of a frame being rendered exactly on a sim tick is very low. Unless the engine has some frame-time lock I'm not aware of, this is the standard scenario. I believe prediction is what takes over here, with the client rendering additional time for each frame based on the existing "move" data (actually, maybe just the last velocity values) in each object, appropriate interpolation for animations, and so on. The fact that multiple sim ticks may have occurred between two frames doesn't seem like it could break this. If the frame renders anywhere between lastTick and lastTick+31, it'll predict. If we make it to lastTick+32 without a frame render, a new tick is generated.
Not actually having explored how this is really done, I suppose it's also possible that the sim always runs a tick ahead, but again I see no reason this would break at low framerates. At time T:0, we do a sim for T:32 and render a frame based 100% on the initial sim state. At T:32 we do a sim for T:64. At T:48 we render a frame using a 50% interpolation between T:32 and T:64. At T:64 we sim T:96. At T:80 we render a frame using 50% interp of T:64 and T:96. In this scenario, a frametime of 48ms doesn't seem to be breaking anything. I don't believe this is how the system actually works, but I really don't feel like digging through the sim and render systems to find out.
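That timeline can be checked with a tiny helper (my own illustrative function, not engine code): the interpolation fraction is just how far the render time has progressed into the current tick interval.

```cpp
#include <cassert>

// Illustrative helper, not Torque source: fraction of the way from the
// last simulated tick state toward the next, for a frame rendered at
// renderMs, given the time of the last completed tick.
double interpAlpha(double renderMs, double lastTickMs, double tickMs = 32.0) {
    return (renderMs - lastTickMs) / tickMs;
}
```

A frame at T:48 against a last tick at T:32 gives 0.5, matching the 50% interpolation in the example; likewise T:80 against T:64.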
#5
05/01/2010 (8:57 am)
Quote:
At time T:0, we do a sim for T:32 and render a frame based 100% on the initial sim state. At T:32 we do a sim for T:64. At T:48 we render a frame using a 50% interpolation between T:32 and T:64. At T:64 we sim T:96. At T:80 we render a frame using 50% interp of T:64 and T:96. In this scenario, a frametime of 48ms doesn't seem to be breaking anything.
I think I get what you're saying here. At T:48, a new tick has already been produced for interpolating between T:32 and T:64, and the difference between 48 and 32 is used to compute the correct interpolation fraction. If 100 ms go by from one frame to the next, three ticks will be produced from the current object state and applied in succession, and the interpolation will place the rendered state between the last tick and the one before it, based on the 100 ms - 96 ms = 4 ms remainder. In this sense, 32 ms isn't the actual frame time; it's an upper bound on the resolution of object processing.
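That 100 ms arithmetic can be spelled out like this (again a hypothetical sketch; the function names are mine, not the engine's):

```cpp
#include <cassert>
#include <cmath>

// How many whole ticks fit into one long frame, and what time remains
// for interpolation afterwards. Illustrative only, not Torque source.
int ticksToRun(double frameMs, double tickMs) {
    return static_cast<int>(frameMs / tickMs);
}

double leftoverMs(double frameMs, double tickMs) {
    return frameMs - ticksToRun(frameMs, tickMs) * tickMs;
}
```

For a 100 ms frame with 32 ms ticks: three ticks run (96 ms of simulation), and the 4 ms remainder gives an interpolation fraction of 4/32 = 0.125.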
As you mentioned, Henry, I do think there can be a case for increasing TickMS for performance improvements, especially in single-player games.
Torque 3D Owner Sean H.
Changing TickMS would not affect the actual framerate, nor would it affect the speed of the game at all. I think changing TickMS would only adjust how the physics and processTick calculations are done. Ideally, if you increase TickMS, the performance of the game should improve without much else being affected: more of the game's frames would go toward interpolation, and fewer would be dedicated to physics updates. Every physics update would still accurately extrapolate the new states, as long as TickMS is at least double the average frame time, allowing for at least one interpolated frame.
I can't think of any negative impact from modestly increasing TickMS.