Game Development Community

Profiling memory usage

by Manoel Neto · in Torque 3D Professional · 08/26/2010 (1:30 pm) · 5 replies

I'm working on a game that "spawns" dedicated servers on demand to create instances for each group of players. The problem is that the dedicated server is using quite a bit of RAM, and I need to find ways to bring it down so we can increase the number of instances per server blade.

Since there is a ton of data/assets that never changes and is loaded with every instance, my first idea is to store it in shared memory. But before I set out randomly changing memory allocation code to use shared memory, it would be wise to profile memory usage first and find out where the allocations come from: how much RAM is used by scripts, animation data, collision data, etc. Where should I look to do this? The Torque Memory Manager seems broken (I get tons of compilation errors when I enable it), is there another way?
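One fallback when the built-in memory manager won't compile is a small hand-rolled tracker: route each subsystem's allocations through a function that takes a category tag and tallies bytes per category, then dump the totals. A minimal sketch, assuming nothing about Torque's internals (all names here are hypothetical, not engine API):

```cpp
#include <cstdio>
#include <cstdlib>
#include <map>
#include <string>

// Hypothetical per-category allocation tracker (not part of Torque).
// Each subsystem calls MemTracker::get().alloc(bytes, "category")
// instead of plain malloc, so a dump shows where the RAM is going.
class MemTracker {
public:
    static MemTracker& get() {
        static MemTracker instance;
        return instance;
    }

    void* alloc(std::size_t bytes, const std::string& category) {
        void* p = std::malloc(bytes);
        if (p)
            mTotals[category] += bytes;  // only count successful allocations
        return p;
    }

    std::size_t bytesFor(const std::string& category) const {
        std::map<std::string, std::size_t>::const_iterator it = mTotals.find(category);
        return it == mTotals.end() ? 0 : it->second;
    }

    // Print every category's running total, biggest offenders included.
    void dump() const {
        for (std::map<std::string, std::size_t>::const_iterator it = mTotals.begin();
             it != mTotals.end(); ++it)
            std::printf("%-16s %zu bytes\n", it->first.c_str(), it->second);
    }

private:
    std::map<std::string, std::size_t> mTotals;
};
```

It won't catch allocations you don't route through it, but tagging just the suspects (animation, collision, script) is usually enough to find the big ones.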

#1
08/26/2010 (1:31 pm)
RAM is cheap, buy bigger machines?

www.iceisfun.com/48_128.png
#2
08/26/2010 (1:57 pm)
/wipes drool

Right now each instance is using 120 MB of RAM. I believe that is a bit too much for a game running without graphics. I would need 100 servers with 128 GB each to handle 100K instances, and those ain't cheap to rent.
#3
08/26/2010 (9:00 pm)
Hummm.... 120 MB does sound like a lot. I'm sure there is something the server is loading that it shouldn't be.

What I would do is re-enable the old Torque Memory Manager, do a dump of allocations and look for the big ones.

For the future we're looking at allowing a single Torque process to contain multiple game servers. This should allow resources like collision meshes and datablocks to be shared between multiple server instances, reducing overhead.
#4
08/26/2010 (9:29 pm)
Quote:What I would do is re-enable the old Torque Memory Manager and do a dump of allocations and look for the big ones.
I tried, but it generates far too many compilation errors for my liking.

The game I'm working on uses a lot of animations (we have over 400), and I already rewrote the sequence loading code so they are shared across all characters, by storing the sequences inside a Resource (memory usage was much, much worse when the key data was stored inside each TSShape).
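The sharing described above can be sketched as a cache keyed by sequence file, so every shape holds a handle to one copy of the key data instead of its own. This is only an illustration using C++11 smart pointers, not Torque's actual TSShape/Resource code; the type and file names are made up:

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Stand-in for the bulky per-sequence key data that used to be
// duplicated inside every TSShape.
struct SequenceData {
    std::vector<float> keyframes;
};

// Hypothetical cache: the first acquire() loads a sequence, every
// later acquire() of the same path returns the same shared copy.
class SequenceCache {
public:
    std::shared_ptr<SequenceData> acquire(const std::string& path) {
        std::shared_ptr<SequenceData> seq = mCache[path].lock();
        if (!seq) {
            seq = std::make_shared<SequenceData>();  // stand-in for real file loading
            mCache[path] = seq;
        }
        return seq;
    }

private:
    // weak_ptr so the cache doesn't keep unused sequences alive forever.
    std::map<std::string, std::weak_ptr<SequenceData> > mCache;
};
```

The weak_ptr entries mean a sequence is freed once the last character using it goes away, while any number of live characters share one copy.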

I created a few functions to keep track of allocations in the animation system, and we have around 50 MB worth of animation data, which will be the first thing to go into shared memory.

I also suspect the dedicated server is loading/allocating stuff a dedicated server has no business doing, but I need to investigate. Anyway, I'm thinking of a way to hook shared memory into the resource manager, so I can do mass sharing of resources (which would cause massive speed-ups when spawning new servers).
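On Linux, hooking shared memory under the resource manager could look roughly like the POSIX sketch below: one process publishes an immutable resource blob into a named shared-memory object, and every spawned server maps it read-only, so the physical pages exist once. The object name and helper functions are made up for illustration; a real integration would also need versioning and cleanup:

```cpp
#include <cstddef>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Publish an immutable blob (e.g. preprocessed animation or collision
// data) into POSIX shared memory under the given name.
void* publishShared(const char* name, const void* data, std::size_t size) {
    int fd = shm_open(name, O_CREAT | O_RDWR, 0644);
    if (fd < 0)
        return nullptr;
    if (ftruncate(fd, (off_t)size) != 0) {
        close(fd);
        return nullptr;
    }
    void* mem = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  // the mapping stays valid after the fd is closed
    if (mem == MAP_FAILED)
        return nullptr;
    std::memcpy(mem, data, size);
    return mem;
}

// Map the published blob read-only; each server instance calls this
// instead of loading and allocating its own copy.
const void* mapShared(const char* name, std::size_t size) {
    int fd = shm_open(name, O_RDONLY, 0);
    if (fd < 0)
        return nullptr;
    void* mem = mmap(nullptr, size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);
    return mem == MAP_FAILED ? nullptr : mem;
}
```

Mapping read-only is the safety net here: an instance that scribbles over shared data segfaults instead of corrupting every other instance.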

Quote:For the future we're looking at allowing a single Torque process to contain multiple game servers. This should allow resources like collision meshes and datablocks to be shared between multiple server instances, reducing overhead.
The way things work right now is quite convenient for us: since each instance is a separate process, the servers make full use of multiple cores/CPUs, a crashing instance doesn't bring the entire server down, and we can distribute load across multiple servers easily.
#5
08/26/2010 (11:33 pm)
It would be very nice to see a resource from TP outlining the core components a dedicated server needs. We only run one server at the moment, but we'll have similar issues to Manoel down the track.