How does video memory work exactly?
by Tomas Dahle · in Torque Game Builder · 03/28/2005 (4:08 am) · 12 replies
Just a little question that has been in the back of my mind.
How many graphics can I load at the same time? Is it restricted to exactly what can fit in the card's video memory? Or does T2D load anything I ask it to and only put on-screen graphics into memory? And if I happen to run out of video memory, can T2D automatically reduce the resolution of my graphics? (Well, except for fonts, that would be terrible, hehe.)
I'm basically wondering if I have to count bytes in order to hit a certain minimum video-memory requirement, or if T2D does that work for me.
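If you do end up counting bytes yourself, the arithmetic is simple: width × height × bytes per pixel, plus roughly a third more if mipmaps are generated. This is only a lower-bound sketch (not anything T2D reports); real drivers may pad or convert formats:

```python
def texture_bytes(width, height, bytes_per_pixel=4, mipmaps=False):
    """Rough VRAM footprint of one texture.

    Assumes the driver stores the texture at the size you uploaded;
    drivers may pad or convert formats, so treat this as an estimate.
    """
    base = width * height * bytes_per_pixel
    if mipmaps:
        # A full mip chain adds roughly 1/3 on top of the base level
        # (1/4 + 1/16 + 1/64 + ... converges to 1/3).
        base += base // 3
    return base

# A 512x512 32-bit RGBA sprite sheet:
print(texture_bytes(512, 512))                 # 1048576 bytes (1 MB)
print(texture_bytes(512, 512, mipmaps=True))   # ~1.33 MB with mipmaps
```

Summing that over every image you load at once gives a workable budget figure for a target card.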
#2
03/28/2005 (6:54 am)
I want world peace and a ham sandwich. I guess if I want the ham sandwich badly enough, though, I can just make it myself.
#3
03/28/2005 (7:24 am)
Rofl... sorry, two great posts in a row :)
#4
03/28/2005 (7:24 am)
Yea, I'd like a working replica of the original Batmobile. Also, I'd like to give all penguins opposable thumbs; I really think they could use them.

No, but seriously, that seems like a really small problem to deal with. I'd rather have networking and bug fixes than 'fluff features' that don't actually bring anything to the table.
#5
03/28/2005 (7:26 am)
Think of it this way: if you specify a "chunky", it tells the engine to look for optimizations as mentioned, and that check takes time. If you keep your textures to POT, you don't need to waste time on the check. So if you don't mind being less efficient, you could just make everything a "chunky"; however, efficiency is the name of the game :)
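The POT test itself is cheap; a sketch of what such a check might look like (illustrative, not T2D's actual code):

```python
def is_power_of_two(n):
    """True if n is a positive power of two (1, 2, 4, 8, ...)."""
    return n > 0 and (n & (n - 1)) == 0

def needs_chunking(width, height):
    """A non-POT image would need padding or chunking on old hardware."""
    return not (is_power_of_two(width) and is_power_of_two(height))

print(needs_chunking(256, 256))  # False: already POT
print(needs_chunking(640, 480))  # True: neither side is POT
```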
#6
03/28/2005 (8:01 am)
Strictly speaking, though, chunking an image, as I understand it, is done for the benefit of older graphics cards. It's actually slower to display an image in chunks than as a single image, but the point still stands. I'd rather have the option to either cater to the lower end tech, or go balls to the wall, so to speak.
#7
03/28/2005 (8:40 am)
This is also EA1, and it's been mentioned by Melv that they plan to make this aspect more transparent to the coder. But you have to wait for it. :)
#8
03/28/2005 (12:51 pm)
Quote: I would like all types of images handled automagically. I see no reason why I have to specify an fx scroller or chunky, when I could just be sticking with the same command....
It's a performance issue. Rendering one larger texture is faster than rendering a number of smaller ones. Because it's a performance issue, the resolution of which way to handle it must be in the user's hands.
Even when they integrate the two, there will need to be some user control over whether or not the image gets cut into smaller textures for Voodoo compatibility (or for rendering really, really big stuff on more modern cards).
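One way to see why it's a performance question is to count the draw calls: a single texture is one quad, while a chunked image is one quad per chunk. A back-of-envelope sketch (the 256 chunk size is an assumption, per the fxChunkedImage default mentioned elsewhere in the thread):

```python
import math

def chunk_count(width, height, chunk=256):
    """Number of chunk-sized tiles (and thus quads/draw calls)
    needed to cover a width x height image."""
    return math.ceil(width / chunk) * math.ceil(height / chunk)

print(chunk_count(1024, 1024))  # 16 draw calls instead of 1
print(chunk_count(800, 600))    # 4 x 3 = 12 chunks
```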
#9
03/28/2005 (1:35 pm)
Torque itself keeps track of the number of textures loaded and the size of each one -- it might even use proxy texture objects to figure out whether enough video memory is left to load a texture. There are ways to access this data from the debugging modes in Torque; however, you can never be 100% sure. The video card's drivers will transform textures into the card's native format, so don't always expect a one-to-one mapping. (Though, generally, 32-bit RGBA and packed-pixel 24-bit textures use up the same amount of RAM on the card, even though their element ordering might be different.)

Also don't forget video buffer sizes. Usually two framebuffers, or three if you are using triple buffering. Then there is the accumulation buffer and the depth/stencil buffer. (Usually one and the same on modern graphics cards.) That takes up video RAM. Then there are the primitives sent to the card itself. And so on.
There are some diagnostic tools that will do things like act as a wrapper for OpenGL calls and query the card and find out exactly what sort of memory usage it's getting, such as gDEBugger.
So, in short, no there is not a generic method for handling this.
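Those fixed buffers can at least be estimated the same way as textures: color buffers × resolution × bytes per pixel, plus a combined depth/stencil buffer. A sketch, assuming 32-bit color and a packed 24-bit depth / 8-bit stencil format (typical for cards of that era):

```python
def framebuffer_bytes(width, height, color_buffers=2,
                      color_bpp=4, depth_stencil_bpp=4):
    """Estimate VRAM used by the window's buffers alone.

    color_buffers=2 is double buffering; use 3 for triple buffering.
    depth_stencil_bpp=4 assumes a packed 24-bit depth / 8-bit
    stencil buffer.
    """
    color = width * height * color_bpp * color_buffers
    depth_stencil = width * height * depth_stencil_bpp
    return color + depth_stencil

# 1024x768 double-buffered: 6 MB of color + 3 MB depth/stencil
print(framebuffer_bytes(1024, 768))  # 9437184 bytes (9 MB)
```

On an 8 MB Voodoo-class card, that overhead alone already exceeds the budget at that resolution, which is part of why this mattered.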
#10
03/28/2005 (3:04 pm)
Quote: Then there is the accumulation buffer and the depth/stencil buffer. (Usually one and the same on modern graphics cards.)
Well, I would hope that T2D does not ask GL for accumulation buffers and depth/stencil buffers unless you set it up to do so. Depth buffering might be useful in a few 2D games, but it's really more of a feature that 3D games need.
Quote: Then there are the primitives sent to the card itself.
Those don't live on the graphics card. At least, not for our purposes. Because the location of everything in T2D changes almost constantly (under the average-case circumstance), storing vertex data on the video card via the VBO extension is probably not the most reasonable use of that feature. Granted, for tilemaps, it could work if the movement were handled through matrices. But, even then, since tiles are t2dSceneObjects, this is probably not an optimization that they would be able to utilize.
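The vertex data involved is tiny anyway, which is why streaming it every frame is cheap. A rough sketch of the per-frame upload size (the vertex layout here is an assumption for illustration, not T2D's actual format):

```python
def per_frame_vertex_bytes(num_quads, floats_per_vertex=4):
    """Bytes of vertex data streamed to the card per frame if every
    quad moves (so nothing is worth caching in a static VBO).

    floats_per_vertex=4 assumes x, y position plus u, v texcoord.
    """
    verts = num_quads * 4                 # 4 corners per quad
    return verts * floats_per_vertex * 4  # 4 bytes per float

# Even 1000 moving sprites are only 64 KB of vertex data per frame.
print(per_frame_vertex_bytes(1000))  # 64000
```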
#11
03/28/2005 (5:54 pm)
T2D will create a depth buffer by default. The window creation/OpenGL initialization code is the same as in normal Torque. So, by default, you will get a depth and stencil buffer if it's supported, as that is what T2D asks for. This can be disabled, but you have to do it in the engine code, as it currently just assumes you want one.

This is indeed a waste for a T2D game, where all we care about is the frame buffer and the blending modes. Layer sorting automatically keeps our quads drawn in proper blending order.
Though the accumulation buffer isn't created by default.
#12
03/29/2005 (2:23 am)
There is indeed lots of work to be done on the rendering side of things. A clean separation from TGE and the removal of this legacy code from both the engine and script (common) is important. The delay here is deliberate: we're waiting to merge with TGE 1.4 before we permanently branch. We want all that 1.4 yummy goodness for T2D.

- Melv.
Torque Owner Teck Lee Tan
T2D, at the moment, has the fxChunkedImage class to split up large textures/images into bite-sized 256x256 chunks, specifically for compatibility with older cards with limited video memory. Melv has said that there may come a point where T2D handles textures without the need to specify how to manage them (chunked, bitmap, etc.).
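The trade-off fxChunkedImage makes can be quantified: without chunking, a non-POT image must be padded up to the next power of two, wasting memory; with chunking, only the partial edge tiles carry padding. A sketch comparing the two (256 chunk size per the post above; the rest is illustrative):

```python
import math

def next_pot(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def padded_bytes(width, height, bpp=4):
    """Memory if the whole image is padded to one POT texture."""
    return next_pot(width) * next_pot(height) * bpp

def chunked_bytes(width, height, chunk=256, bpp=4):
    """Memory if the image is cut into chunk x chunk POT tiles."""
    tiles = math.ceil(width / chunk) * math.ceil(height / chunk)
    return tiles * chunk * chunk * bpp

# A 640x480 background: padding it to 1024x512 costs more VRAM
# than storing it as 3x2 chunks of 256x256.
print(padded_bytes(640, 480))   # 2097152 bytes (2 MB)
print(chunked_bytes(640, 480))  # 1572864 bytes (1.5 MB)
```

So chunking trades extra draw calls (one per tile) for less wasted padding and compatibility with cards that cap texture dimensions at 256.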