Duel core AMDs and Torque
by arteria3d · in General Discussion · 09/22/2006 (4:58 am) · 48 replies
Hi,
Don't know whether anyone else running a dual-core machine has noticed this, but when you run a Torque level it takes around 2-3 minutes for the level to settle down to a decent FPS.
When I first run Torque, the FPS is no more than 3; then after a few minutes the FPS goes right up to a normal level for my particular level.
Apparently, from doing some research, a lot of games are having problems with AMD and also Intel dual-core machines, but there are fixes.
Has anyone else experienced this?
Steve
www.arteria-gaming.com
About the author
Owner of UK-based Ltd company Arteria Media Ltd, trading as Arteria3d. Website: arteria3d.com
#3
09/22/2006 (7:55 am)
The specific drivers are the motherboard/CPU drivers; the update is available from AMD. However, the easier solution is to add a call to SetProcessAffinityMask() somewhere early in your program (ideally, in some DllMain). This will avoid any multi-core timing difference problems.
#4
09/22/2006 (8:04 am)
Hi,
Not being that knowledgeable about programming with Torque, where would I actually add that?
Thanks for replying.
Steve
#5
09/22/2006 (8:10 am)
Just as a note. If you are searching for "duel core", you should be searching for "dual core". You will get much better results. While some processors do duke it out internally, that's not what you're looking for. It will help your search results to have the correct spelling and terminology (especially since both are actual words).
#6
09/22/2006 (1:59 pm)
Actually, the AMD patch didn't work for Torque. What worked for me was a Torque rebuild, turning off the high-resolution timer. I think it only took a #define directive. Unfortunately, that means other Torque 1.4 games run HORRIBLY on my machine.
Even more strangely, this problem seems unique to TGE. TGB doesn't seem to have it. Go figure.
I wish this could be set as a preference or something in Torque - it needs to be fixed as dual-core machines become more popular.
#7
09/22/2006 (2:09 pm)
You can turn the #ifdef into an if(), and set a global if you detect multiple cores.
#8
09/23/2006 (4:14 am)
Hi,
The problem here is I'm not an advanced programmer. Are these fixes hard to implement?
Steve
#9
09/24/2006 (7:41 am)
Yeah, updating the drivers helps the speed problem but does not help the map lag. This has been discussed many times, and each time someone creates a thread I tell them this:
It's related to Torque's high-precision timer - force the game to one thread (processor affinity) and it'll be fine - that, or force the game to GetTick().
My current plan is to have the app force itself to one CPU - or use the slower GetTick system (as compared to the "high-resolution timer" mentioned above). This could be an option in your game's preferences... but I would just force it to one thread. When it comes time for me to deal with the problem, I will post what I did as a resource. Until then, good luck.
#10
09/24/2006 (8:48 am)
GetTickCount() has a resolution of 10-20 milliseconds. If you run faster than about 50 Hz, it'll start causing problems in rendering. You really should use timeGetTime() instead, after calling timeBeginPeriod(2), to get the millisecond timer, if that's the way you want to go. Note that the millisecond timer will lose time over long runs when bus traffic is high, due to a well-known hardware/software interaction problem that's been present for many years.
#11
09/24/2006 (4:43 pm)
Here's the thread: http://garagegames.com/mg/forums/result.thread.php?qt=35267
The solution (for me, with admitted problems with GetTickCount()) was to change what's on line 1238 in my winWindow.cc.
I changed it from:
mUsingPerfCounter = QueryPerformanceFrequency((LARGE_INTEGER *) &mFrequency);
to this:
mUsingPerfCounter = false;
I then went through all the places where GetTickCount() is called and changed each call to timeGetTime().
Both functions are supposed to return time in milliseconds. I don't know why GG / Dynamix chose GetTickCount over timeGetTime - maybe the latter function didn't exist until more recently. Anyway, that change will hopefully give you the best of both worlds.
Or again, you can just set the thread affinity. However, my concern (based more on my own ignorance than anything else) is that it might prevent Windows from moving sound / music processing to a different CPU / core, thus preventing the game from running as fast as it could on dual-core / dual-CPU machines. Not that it probably matters.
timeGetTime apparently has its own resolution issues - much better than GetCurTicks, but it's still a limit (around 1/200th of a second, or 5 ms, depending upon the machine). If that's really an issue, then you may want to force thread affinity instead.
#12
09/24/2006 (9:05 pm)
TimeGetTime() reads a separate counter than GetTickCount() (basically, I believe the GetTickCount() value gets incremented each time there's a context-switch interrupt - but I could be wrong). To get better precision with timeGetTime(), try calling timeBeginPeriod(2). (Curiously, "1" isn't as good as "2".) This will give you 2 ms resolution, more or less.
#13
09/24/2006 (9:23 pm)
... and just in case you weren't having enough fun yet, make sure to match all your timeBeginPeriod(2); calls with timeEndPeriod(2); calls.
#14
09/25/2006 (1:08 am)
Thanks for all the replies. Is this issue something that GG are going to try and override in Torque, or do you think we will always have these problems for the foreseeable future? The problem here is that I cannot implement the changes outlined, because I only have limited skills. This is a little frustrating, to say the least.
#15
09/25/2006 (8:17 am)
I sure hope they do! More and more people are having this problem with Torque and the demos. It's a pretty simple fix (I promptly forgot how I did it after I made the changes many months ago), but pretty critical. BTW, I *HAVE* experienced the problem with TGB now. It seemed to work fine when I was running directly from the builder tool, but when I created a stand-alone version, I started getting horrible jerkiness and a framerate of something like 4 frames per second. Very ugly on my dual-core machine (it ran fine on older machines). So I have to make the changes to TGB as well.
#16
09/25/2006 (9:04 pm)
Alright, we were having really weird issues with the servers "going idle" today, where for periods of 30-45 seconds the CPUs would all suddenly drop to 0% usage on all processes and stall the entire system. I found this on a weblog regarding timeBeginPeriod and timeEndPeriod:
"This has a number of side effects - it increases the responsiveness of the system to periodic events (when event timeouts occur at a higher resolution, they expire closer to their intended time). But that increased responsiveness comes at a cost - since the system scheduler is running more often, the system spends more time scheduling tasks, context switching, etc. This can ultimately reduce overall system performance, since every clock cycle the system is processing "system stuff" is a clock cycle that isn't being spent running your application. For some multimedia applications (video, for example) the increased system responsiveness is worth the system overhead (for instance, if you're interested in very low latency audio or video, you need the system timers to run at a high frequency)."
I am curious what problems just using GetTickCount() can cause. I don't think timeGetTime() is any better unless you mess around with the timeBeginPeriod and timeEndPeriod stuff... and from today's experiment on our servers, that seems to cause some really bizarre "catch-up behavior" across all processes.
#17
09/26/2006 (9:10 am)
On the server, I don't know if the loss of resolution from GetTickCount() is going to be any big deal - you are only talking about a resolution of something like 20 cycles per second. I'm not using timeBeginPeriod or timeEndPeriod in conjunction with timeGetTime(), but I'm working on a single-player game, and haven't left it running for more than an hour or so at this point. So maybe there's something I am not seeing on my end.
#18
09/27/2006 (2:57 pm)
Just a note: I run an AMD dual-core and have been working with TGE, TLK, and TSE for some months now. I think I may have experienced a similar problem, but I really only tend to see it when I've cleared the DSOs and it has to recompile them. And even then it's only sometimes, like if I'm using a bit of scripting it isn't so fond of.
#19
09/27/2006 (3:21 pm)
I've got an FX-62 and am working with TSE and am not saving compiled content, and do not experience this at all.
#20
09/27/2006 (5:01 pm)
No, I'm not seeing it in TSE either, yet - but then I'm not ever deleting the DSOs yet either.
Torque Owner Matt Vitelli