Game Development Community

100% CPU Usage Normal with Dedicated

by Eric Clausing · in Torque Game Engine · 10/11/2006 (4:26 pm) · 20 replies

I have tried this on both Gentoo and now Ubuntu and whenever I run the compiled dedicated server I get 100% CPU Usage. Is that normal? I am using the CVS version of Torque and this is without any changes to code.

Thanks

Claus

#1
10/12/2006 (6:49 pm)
Does anyone else have this problem? Running 100% without any code changes.
#2
10/12/2006 (10:16 pm)
I'm not sure it's supposed to take any less than that, much like the game takes 100% of the CPU on Windows. Looking at the code, it creates the socket in non-blocking mode, and doesn't use select().

You can easily make it take less by, say, adding a call to usleep(10000), for example in Net::process(), right before you enter the for(;;) loop. Look in ./platformX86UNIX/x86UNIXNet.cc line 415 or thereabouts.
#3
10/12/2006 (10:30 pm)
Hardly makes a peep here. You're starting dedicated, but are you doing
make dedicated
to build it?
Also, are you building in debug or release mode?
#4
10/13/2006 (5:27 am)
As far as I recall, the server does a check on its 'sleep latency' and decides whether it's low enough to allow a sleep on each cycle. Just checking.. yeah, have a look in Platform::init() in x86UNIXWindow.cc

My server was always failing this test, and therefore running at 100% CPU. I simply overrode this and forced it to use sleep. CPU load dropped to about 5% I think. My game is turn-based however, the latency check was obviously made with twitch-games in mind...
#5
10/13/2006 (8:12 am)
@Dreamer I am doing a make dedicated. Perhaps it does have something to do with what the others are saying. I'll take a look.
#6
10/13/2006 (11:41 am)
Ok, looking a little more at x86UNIXWindow.cc, it seems as if you can force sleeping on in dedicated mode by passing "-dsleep" on the command line.
#7
01/26/2007 (7:19 am)
I know this thread is old, but I am experiencing the exact same problem now. I built Torque 1.4.2 under Debian 4.0 RC1 (Etch) using make dedicated. Both DEBUG and RELEASE builds use 97-99% CPU when I run them with ./torqueDemod -dedicated -dsleep -mission starter.fps/data/missions/stronghold.mis.

Any pointers to a fix to this? This is a big problem for me...
#8
01/26/2007 (12:30 pm)
You mean high CPU usage when someone's playing, or when there are no players?
#9
01/26/2007 (1:07 pm)
Try leaving off the -dsleep argument and see if that helps.
#10
01/26/2007 (1:41 pm)
The only build I have ever seen run great on Linux is Dreamer and Co's MMOKit. It sits idle all the time and never makes a sound when no one is using it.

All other standard builds are pushing 99%.

Dreamer, did you do anything specific to Linux to make things better?

This is TheClaus BTW, if you were wondering.
#11
01/26/2007 (1:48 pm)
Not really anything Linux-specific; we've tried at all times to make sure anything we do works pretty much the same on Windows, Linux, and Mac.

One thing the kit does to help it idle well is change up the AI system completely. The AI sleeps until it is awoken by an external event. We also added a timer event to wake the AI up periodically to "look around", for lack of a better word.

Regards,
Dreamer
#12
01/27/2007 (2:20 am)
@David: High CPU usage when nobody is on the server, too. I KNOW we had it working last year in the summer... we made a dedicated build, and even with people on it the server took roughly 7-13% when there was action.
#13
01/27/2007 (10:36 am)
Did you try renicing the process?

Torque is a game engine, and is going to take all of the CPU that's available by default. Nothing in the dedicated build is -really- going to override that by default... the "dedicated" build is an example implementation, not something we expect people to use for what should be a highly designed implementation (dedicated servers).
#14
01/27/2007 (10:59 am)
Yeah, I tried renicing.

Well, the documents don't say you guys don't expect us to use it. :) Of course, I browsed the code, and the dedicated build pretty much excludes all the graphics stuff, among some other things.

I just wonder why it worked when I last tried, back when CVS was still up. We had 3-4 instances of TGE running on our server, each of them using no CPU cycles when there was nobody on them. It rose to about 14% when the team joined one server.
#15
01/27/2007 (2:14 pm)
Oliver asked me to take a look at this, and I found and fixed the problem.

The short version of the problem is that TGE was calling the nanosleep() function incorrectly. I don't know if the $Pref::BackgroundSleepTime variable has been in for a while, but it's present in 1.4.2. The Unix platform code grabs the value of that variable and multiplies against a constant to get the number of nanoseconds to sleep. So far so good, but the problem is that nanosleep() takes two parameters ... seconds and nanoseconds. The above $pref variable defaults to 3000ms (aka 3 seconds) but the code that calls nanosleep doesn't convert from nanoseconds to seconds AND nanoseconds. Net result of the bug is zero sleeping at all.

Adding code to do the conversion and call nanosleep() correctly restores the old, correct behavior. Using the code below, I was easily able to change the BackgroundSleepTime to 10ms while maintaining minimal (0.3%) CPU utilization on my laptop running Ubuntu Edgy. Note that the timer granularity of the Linux kernel varies, so values below 10-20ms may not work reliably. YMMV, so experiment to find the numbers that work best.

There are two places (one for dedicated servers, one for regular clients) in engine/platformX86UNIX/x86UNIXWindow.cc where you want to replace:

Sleep(0, getBackgroundSleepTime() * 1000000);

with

S32 sleepTime = getBackgroundSleepTime();            // milliseconds
S32 sleepSeconds = sleepTime / 1000;                 // whole seconds
S32 sleepNanoseconds = (sleepTime % 1000) * 1000000; // remainder, in nanoseconds
Sleep(sleepSeconds, sleepNanoseconds);

This should work, and allows you to leverage the $pref variable that was added to tweak sleep times at runtime. Pretty cool functionality, just wasn't tested properly on linux I guess.

This fix wasn't heavily tested, but works for me, so be sure to test before you use it heavily :)
#16
01/27/2007 (11:44 pm)
Hey that's great and it works! Thanks! :)
#17
01/31/2007 (6:20 pm)
Issue #2613 - noted! Thanks for the write-up - very very helpful!
#19
03/14/2007 (2:52 pm)
Hi Everyone,

:. Another option is replacing the Sleep function with the following code:

FILE: engine/platformX86UNIX/x86UNIXWindow.cc
LINE: ~354
static inline void Sleep(U32 secs, U32 nanoSecs)
{
   const U32 oneBillion = 1000000000;
   timespec sleeptime;

   // Constraint: tv_nsec must be less than one billion, so if nanoSecs
   // is out of range, carry the whole seconds over into secs and fill
   // in the timespec accordingly.
   if( nanoSecs >= oneBillion ) {
      secs += nanoSecs / oneBillion;
      nanoSecs %= oneBillion;
   }
   sleeptime.tv_sec = secs;
   sleeptime.tv_nsec = nanoSecs;
   nanosleep(&sleeptime, NULL);
}

:. Many thanks for pointing out where to find the problem.
#20
03/17/2007 (7:32 am)
Super - thanks for this. Works here too