Memory footprint in T2d
by Simon Love · in Torque Game Builder · 07/13/2005 (8:43 pm) · 6 replies
Greetings,
While working on my t2d project, I had the CRAZY idea to check the memory footprint of the game.
It was about 34 megs, with all sprites loaded, maps and all.
I thought it was a bit much for the scope of my game, so I fired up some other games just to compare.
The Fish demo for t2d eats up about 64 megs of RAM.
Gold digger, the first 'commercial' T2D game, and a pretty sweet puzzler, occupies 147 megs of memory.
Xeno assault 2, which is not T2d, but a cool top-down shooter, occupied from 140 to 170 megs of ram.
I have 768 megs of ram and shouldn't complain, I know.
I also know you can get 256 megs for pocket change these days.
BUT
Is it worth it to take the time to delete all gui screens from memory when switching screens?
Is it better to load exactly what you need, and wipe the RAM clean every time those needs change?
It would include more work, for sure, but is it worth it?
I've always thought that you had to use resources as sparingly as possible and code as efficiently as possible.
So, am I just going completely insane? Or is memory conservation completely irrelevant on modern PCs?
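The two strategies the post is weighing can be sketched in a few lines. This is a hedged illustration in plain Python, not TorqueScript; every name here (`Screen`, `ScreenManager`, `switch_to`) is invented for the example, and the asset data is a stand-in for real sprite/bitmap memory.

```python
# Minimal sketch of "keep everything resident" vs. "unload on screen change".
# All names are illustrative; this is not a T2D API.

class Screen:
    """Stand-in for a GUI screen and its loaded assets."""
    def __init__(self, name):
        self.name = name
        self.assets = [0] * 100_000  # pretend this is sprite/bitmap data

class ScreenManager:
    def __init__(self, unload_on_switch=False):
        self.unload_on_switch = unload_on_switch
        self.cache = {}
        self.current = None

    def switch_to(self, name):
        if self.unload_on_switch and self.current is not None:
            # Strategy B: wipe what we no longer need. Lower footprint,
            # but the old screen must be rebuilt next time it is shown.
            self.cache.pop(self.current.name, None)
        if name not in self.cache:
            self.cache[name] = Screen(name)  # (re)load from disk
        self.current = self.cache[name]
        return self.current
```

With `unload_on_switch=True`, switching from the menu to the game drops the menu's assets; with the default, both stay cached. The trade-off is exactly the one the post asks about: memory footprint versus the cost of rebuilding screens on every revisit.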
About the author
I am here to help. I've worked at every imaginable position in game development, having entered the field originally as an audio guy.
#2
07/13/2005 (10:20 pm)
@Jason: I totally understand your point of view; I used to think that when I was playing games like Ultima 7. At the time it required the ultimate machine to run it, and it messed up/optimized your memory based on a system they designed, called Voodoo, I think. Anyways, point is, they were asking for RAM, but they used it.
I'm just thinking that T2d gives us a chance to make awesomely slick games without straining the hardware too much, so that pretty much anyone can play. I understand it is a valid goal to try to push modern PCs to the limit; everybody I know who programs has a lust for more processing power and faster, bigger video cards.
I know that RAM doesn't take longer to access if you have more of it, but should we assume everyone has at least 256 megs of RAM?
My question is: is it normal for relatively small games that take up 8 or 10 megs of hard disk space to occupy 150 megs of RAM? If it is, then alright, thank you very much, I just wanted to know :)
Or could it be lazy memory management? And does it matter if that's the case?
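Part of the disk-vs-RAM gap has a simple arithmetic explanation: sprites ship compressed (PNG and the like) but live in memory as raw pixels. A rough back-of-the-envelope, assuming 32-bit RGBA textures (the sheet count and sizes are invented for illustration, not taken from any real game):

```python
# Why an 8-10 MB game on disk can plausibly use 100+ MB of RAM:
# compressed images on disk expand to raw pixels (assumed 4 bytes/px, RGBA).

def raw_texture_bytes(width, height, bytes_per_pixel=4):
    """Uncompressed in-memory size of a texture."""
    return width * height * bytes_per_pixel

one_sheet = raw_texture_bytes(1024, 1024)  # one 1024x1024 sprite sheet
total = 30 * one_sheet                     # thirty such sheets

print(one_sheet // (1024 * 1024), "MiB per sheet")   # 4 MiB per sheet
print(total // (1024 * 1024), "MiB total")           # 120 MiB total
```

A 1024x1024 sheet that might be a few hundred KB as a PNG is 4 MiB uncompressed, so a modest pile of art easily reaches the 150 MB range before counting engine overhead, audio buffers, or driver-side copies.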
#3
07/13/2005 (11:05 pm)
Quote: but should we assume everyone has at least 256 megs of Ram
If you're already assuming they have 3D acceleration (and you are, using Torque2D), assuming they have 256 megs of memory is an extremely safe bet. Also keep in mind that just because a process has 256 megabytes of memory "allocated" doesn't mean it needs to access it all at once, and the virtual memory systems in modern OSes (Windows 2000 and higher, Mac OS X, recent versions of Linux) are good at what they do, so let them have at it.
In my experience, it is pretty useless to worry about things like this before they are known to be an actual problem. Believe me, I used to do it too... worry about squeezing every last bit out of the memory and every last cycle out of the cpu... This path leads to tons of wasted time these days. It is far better to just pick a minimum spec configuration and test on that and only address performance problems if and when they arise. Which isn't to say you shouldn't worry about things like writing efficient algorithms, but if you find yourself starting to worry about wasting a few kilobytes here or a couple of nanoseconds when you're in a non-critical loop, walk away and drink some beer.
Also keep in mind that while it may seem like the "right thing to do" to clean up your GUI resources and menus when they aren't being shown, it is quite probable that the overall user experience will actually be better if you just leave them in memory because sooner or later the user is going to exit out and go back to those screens and it is almost always faster for the vm/paging system to pull that memory data from on-disk pages than it is to rebuild all those objects again, with a bunch of new memory allocations and such. So don't look at that as "wasted memory" but rather "smart caching of data that is likely to be needed again in the future, that I get for free from the OS".
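The "build once, keep it resident" approach described above can be sketched with a memoized constructor. This is a plain-Python illustration, not a T2D mechanism; `get_screen` and its contents are made up for the example:

```python
# Sketch of "leave screens in memory as a cache": construct on first use,
# never evict, and let the OS page out whatever isn't being touched.
from functools import lru_cache

@lru_cache(maxsize=None)  # never evict; the VM system handles cold data
def get_screen(name):
    # Pretend this is the expensive part: allocations, parsing GUI
    # definitions, building widget trees, and so on.
    print(f"building {name}")
    return {"name": name, "widgets": [f"{name}-widget-{i}" for i in range(3)]}

get_screen("main_menu")  # prints "building main_menu"
get_screen("options")    # prints "building options"
get_screen("main_menu")  # cache hit: no rebuild, no print
```

Returning to a screen costs a dictionary lookup instead of a full rebuild, which is the user-experience win the reply is describing: paged-out memory comes back faster than reconstructed objects.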
#4
07/13/2005 (11:07 pm)
Sweet, thank you for what I feel is a very competent response. Appreciate it.
#5
07/14/2005 (2:36 am)
Agree entirely w/ George. These days, and especially when it comes to graphics pipelines, it's very difficult to get a good handle on exactly how memory allocation within your app will happen. It will always vary from machine to machine based on OS, pipeline (OGL/DX), graphics driver, hardware, etc. Basically, the only thing you can do to get an accurate idea of whether your game will run on your intended minimum spec is to get ahold of such a machine and test extensively... but even that is not to say that it's going to work on all 'similar' machines. This is why console developers can squeeze better performance out of 'low-spec' machines (at least when compared with PC equivalents): because they can focus on a non-moving target, and get to know all aspects of the hardware and OS very, very well.
I get the impression that Torque 2D is looking forward rather than backwards (and rightly so, I think). Memory is becoming cheaper, and 3D acceleration is now standard on even low-end machines. Minimum specs are going to increase over the next few years as they always do, and I think an engine like Torque 2D wants to be ready to capitalise, rather than worry about legacy support that will simply be obsolete in a couple of years.
#6
07/14/2005 (11:50 am)
Quote: In my experience, it is pretty useless to worry about things like this before they are known to be an actual problem.
While I'm a fan of not prematurely optimizing, one must understand that sometimes this kind of thinking can get one in a situation that one cannot get out of without fundamentally changing the game.
If, when you started the project, you wanted to make sure that computers of a certain speed or memory footprint can still play your game, you have to do some things at the beginning of the project to ensure this. You have to develop budgets for your sprites and tile data and not exceed those budgets. You have to develop resource management technology to unload game resources that are not in use. And so forth.
If you don't do this early on, you may get into a situation where you realize late in the game development process that your game doesn't fit on certain machines. Sure, we have virtual memory, but that's unplayably slow if you're thrashing it. At this stage, you can either take the game back to its design and redesign it to live within memory limitations, start ripping out things with little regard for design, or increase the required specs. The first one costs you lots of time, the second may leave you with an unpolished game, and the third is unfortunate.
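The "set budgets early and enforce them" advice can be made concrete with a small tracker that fails loudly when a load would blow a category's budget. This is a hedged sketch; the categories, budget sizes, and class name are all invented for illustration:

```python
# Sketch of per-category memory budgets: charge each asset load against a
# fixed budget so overruns surface during development, not at ship time.

BUDGETS = {"sprites": 32 * 1024 * 1024, "tiles": 16 * 1024 * 1024}

class BudgetTracker:
    def __init__(self, budgets):
        self.budgets = dict(budgets)
        self.used = {category: 0 for category in budgets}

    def charge(self, category, nbytes):
        """Record a load; raise if it would exceed the category budget."""
        if self.used[category] + nbytes > self.budgets[category]:
            raise MemoryError(
                f"{category} budget exceeded: "
                f"{self.used[category] + nbytes} > {self.budgets[category]}")
        self.used[category] += nbytes

    def release(self, category, nbytes):
        """Record an unload."""
        self.used[category] -= nbytes

tracker = BudgetTracker(BUDGETS)
tracker.charge("sprites", 4 * 1024 * 1024)  # fine: 4 MiB of a 32 MiB budget
```

Wired into the asset loader from day one, this turns "the game doesn't fit on the min-spec machine" from a late-project surprise into an error at the moment the offending asset is added.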
Torque Owner Jason Swearingen
But I know what you are saying... is there any way in T2D to get this type of data?
Like what is using what memory, and maybe a good fps count, etc.
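I won't guess at T2D's own console commands here, but as a general illustration of the kind of "what is using what memory" report being asked for, Python's standard-library `tracemalloc` does exactly this for Python programs: it tracks allocations and can name the line of code holding the most memory. The sprite data below is a stand-in for real assets:

```python
# General illustration (not T2D): tracemalloc reports current/peak usage
# and the allocation sites holding the most memory.
import tracemalloc

tracemalloc.start()
sprites = [bytes(100_000) for _ in range(50)]  # pretend asset data, ~5 MB

current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")

# The biggest allocation site -- here, the sprite list above.
top = tracemalloc.take_snapshot().statistics("lineno")[0]
print(top)
tracemalloc.stop()
```

The same idea (per-site or per-category accounting, plus current/peak counters) is what an engine-level memory metrics view would report; a frame counter incremented per tick and sampled once a second gives the FPS side of the question.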