ActivateGhosting and object streaming
by Dave Young · in Torque Game Engine · 07/09/2007 (11:21 am) · 61 replies
Yes, this is a ghosting question!
After reading through a great many posts on ghosting, and TDN articles, I thought I had my brain wrapped around ghosting and scoping.
So I began to dig into the engine to research how I might get some form of object streaming in.
*Assumption* Reading through the TDN and forum posts about ghosting, it seemed like onCameraScopeQuery was where it was decided which objects to scope/ghost, and thus theoretically there was no need to worry about object streaming, as objects wouldn't get ghosted to the client unless they were determined to be in scope.
Yet when I looked into void NetConnection::activateGhosting(), I can see that every object is put into scope if it is ghostable at all, seemingly ignoring the other scoping rules. I wondered about this, as it seems to slow mission loading, especially when there are a lot of objects. I tested it by putting an object on the far edge of a terrain and checking whether it got unpacked as a ghost during the ghosting process, which it did.
I was hoping that it would get sent down as I moved towards it, but it got sent down in mission load instead.
Thoughts?
Goals:
Decrease mission loading time on an object heavy server (like a large TGEA map)
#42
07/12/2007 (9:21 am)
I think there is a communications issue here (and be nice Stefan!), so let me try to describe a little more in detail.
--if your goal is to have the total memory footprint (how much "Memory Used" shows up in the Performance Monitor) go both down and up as different parts of the mission are loaded, you will be looking at re-writing the memory management system from the ground up. That would take probably 3-12 man-months depending on your skillset and understanding of the current system.
--Torque manages memory in multiple ways. The two most fundamental are:
----Torque keeps track of all memory that the operating system has released to the application. When an object is deleted from the system, its memory is released back to Torque's Memory Manager, which holds the memory until another object within Torque asks for memory, and then allocates it directly. If/when the Torque Memory Manager doesn't have enough memory on hand to meet the request, it will ask the operating system for a large chunk of memory (not dependent on the actual amount of memory immediately needed), and give part of it over to the object requesting the immediate assignment of memory.
----Torque also keeps track of what objects have been allocated, and when it's possible, will use a single object in memory and apply the concept of reference counts. This saves memory, because instead of simply giving memory to a new request every single time, it will check to see if a valid "instance" of this object can be referenced instead, and if so, it will return a pointer to this instance instead. When all references to an area of memory are cleared, it will release that memory back to the Memory Manager, which will hold it for future allocations.
Now, back to the use case:
--If an object comes into scope, it will be networked to the appropriate client.
--When this client receives a ghost object that is not currently instantiated, it will ask for a reference to the object from the Resource Manager, and will use that reference.
--The resource manager will check to see if a valid reference to this object already exists in allocated memory, and if so return that reference. If not, it will ask the Memory Manager for enough memory to create an instance, and provide a reference to that new instance.
--When an object goes out of scope for a particular client on the server, it will no longer be ghosted, and will eventually (a few network/processing cycles) be removed from the client simulation.
----The resource manager will decrement the reference count for that instance, and if it reaches zero, it will release the memory back to the Memory Manager.
----It is important to note that to someone "outside" of the application, the memory footprint of Torque will never "go down in megs", because of how the Memory Manager works--it keeps all memory, "planning ahead" for future requests.
If I read Stefan's post accurately, what he is saying is that he did experiment with releasing memory directly back to the OS (instead of leaving it in the Memory Manager), and that the benefits simply were not worth the effort--the memory footprint may (or may not, he didn't say) have gone down, but the delays (probably seen as "hitching") made it simply not worth it--which is why the entire system works the way it does in the first place.
#43
07/12/2007 (10:31 am)
Excellent stuff Stephen. This needs to go into TDN. Where would you think it would be most appropriate?
#44
07/12/2007 (10:36 am)
Depends really, hehe...it's so cross-system that the whole process is hard to classify.
I do want to disclaim one of the things I said though:
Quote:
-if your goal is to have the total memory footprint (how much "Memory Used" shows up in the Performance Monitor), go both down and up as different parts of the mission are loaded, you will be looking at re-writing the memory management system from the ground up. That would take probably 3-12 man-months depending on your skillset and understanding of the current system.
Keep in mind that the design requirement here is to optimize memory management, which is what the Torque Memory Manager is for. You can of course simply turn off the TMM (it's a #define), and that will remove the whole watermarking system and allow you to work directly with the OS memory allocation/deallocation mechanisms, but you lose all of the performance optimizations that the TMM gives you. My work estimation was for someone wanting to re-write the system to both be optimized, and allow for releasing memory back to the OS, which is a difficult challenge.
#45
07/12/2007 (10:44 am)
Added Stephen's explanation to this part of the MemoryManager TDN article.
Stephen or Pat, feel free to edit for clarity and/or accuracy.
#46
07/12/2007 (10:46 am)
Oh those resources.
Well this is going to be a space vs. time thing, then. Torque saves time by preloading data blocks which are flagged with "preload = true;". The preloading step usually involves creating shape instances/textures. That memory is kept around because, in the case Torque was designed for, there are few unique models, and many instances of them. (A bunch of FPS guys running around) A MMO is different in that it has a few cases where there are several instances of the same resource, and then there are also many unique resources.
This is not a trivial problem. This is a scenario where it will probably be a really good idea to figure out what the requirements are with regard to number of models and how many different things will be in an area of the world, and probably some sort of datablock management wherein there are pseudo-"zones" in the world, each with a set of datablocks that the zone uses, so that data can be loaded and unloaded in an intelligent way.
I would figure out the different kinds of geometry/texture data that exist in the requirements. Things that come to mind:
1. World geometry/texture information. These information sets would be, I would think, per "zone", whether the zones are seamless (WoW style) or not seamless (EQ style). If it is seamless then proper zone design would have to take into account how many of these data sets could be in memory at once, since this is the data that would be required to render the environment. Terrain, terrain textures, static geometry etc. This is the easiest case to manage.
2. Predictable geometry/texture information. This would be things like monster models/textures. This set of information could be managed and subdivided with various methods, either automatic or manual (at level design time). It would be required that 0..n of these data sets could be in memory at once, where 'n' is a number that gets decided, and then respected in design. Note that things like players attacking monsters and dragging them places could affect this.
3. Dynamic geometry/texture information. Player characters. There would need to be 0..n support for this, where n is the maximum number of players that can be in a given area.
Anyway, not a trivial problem. Nothing is.
#47
07/12/2007 (11:59 am)
Quote:
--if your goal is to have the total memory footprint (how much "Memory Used" shows up in the Performance Monitor), go both down and up as different parts of the mission are loaded, you will be looking at re-writing the memory management system from the ground up. That would take probably 3-12 man-months depending on your skillset and understanding of the current system.
If you turn off the Torque memory manager, let Windows manage memory, and call SetProcessWorkingSetSize(-1), you'll see Mem Usage drop substantially (or just minimize the window).
I'm not recommending this, and it's a bad idea.
See here:
http://support.microsoft.com/kb/293215
and here:
http://shsc.info/WindowsMemoryManagement
Also, turning off the TMM doesn't remove your ability to use the FrameAllocator (the watermarking allocator). We have TMM turned off, and the FrameAllocator is still in place. They're two different animals.
In general, both the memory manager in Windows 2000/XP and newer, and the Linux MM are more than good enough, IMHO, and you'd probably be wasting your time to reimplement something to manage memory yourself, unless the working set of your app is close to using up all the physical memory on the machine (depends on your min. spec., really -- if that's the case, you have to consider the cost/benefit of trying to support that min. spec., or maybe just biting the bullet and bumping your min. spec.).
No one has specifically mentioned it, but I'm guessing that this thread is largely concerned with client-side or standalone memory consumption, and not server-side.
If your goal is to save some heap on the server-side, one easy fix is to have the ResourceManager purge all the bitmaps after mission load. It doesn't need the bitmaps, it just needs the geometries. This saves a fairly sizable chunk of heap. I added callbacks from bitmapGif, bitmapJpeg, bitmapPng, to do that.
Sorry, I don't mean to sidetrack the ghosting thread.
#48
07/12/2007 (1:52 pm)
Quote:
If your goal is to save some heap on the server-side, one easy fix is to have the ResourceManager purge all the bitmaps after mission load. It doesn't need the bitmaps, it just needs the geometries. This saves a fairly sizable chunk of heap. I added callbacks from bitmapGif, bitmapJpeg, bitmapPng, to do that.
Sorry, I should clarify... a *dedicated* server doesn't need them.
#49
07/12/2007 (2:00 pm)
AFAIK, dedicated servers don't load bitmaps or textures.
#50
07/12/2007 (2:30 pm)
I run dedicated builds a lot... here's a stack:
(gdb) bt
#0 GBitmap::readPNG (this=0x99ffa88, io_rStream=@0x99e78a0)
at dgl/bitmapPng.cc:98
#1 0x084249a8 in Interior::read (this=0x99e9980, stream=@0x99e78a0)
at interior/interiorIO.cc:210
#2 0x08439a54 in InteriorResource::read (this=0x99e98d0, stream=@0x99e78a0)
at interior/interiorRes.cc:89
#3 0x0843a611 in constructInteriorDIF (stream=@0x99e78a0)
at interior/interiorRes.cc:319
#4 0x0820896c in ResManager::loadInstance (this=0x9195ba8, obj=0x92be030,
computeCRC=true) at core/resManager.cc:749
#5 0x0820a04a in ResManager::load (this=0x9195ba8,
fileName=0x99aaac0 "projects/minimal/worlds/plaza.dif", computeCRC=true)
at core/resManager.cc:696
#6 0x0842c5b5 in InteriorInstance::onAdd (this=0x99e7370)
at interior/interiorInstance.cc:492
#7 0x081f9c77 in SimObject::registerObject (this=0x99e7370)
at console/simManager.cc:388
#8 0x081d1e2c in CodeBlock::exec (this=0x99c9578, ip=1805,
functionName=0x99aa288 "projects/minimal/missions/minimal.mis",
thisNamespace=0x0, argc=0, argv=0x0, noCalls=false, packageName=0x0)
at console/compiledEval.cc:617
#9 0x081da7b1 in CodeBlock::compileExec (this=0x99c9578,
fileName=0x99aa288 "projects/minimal/missions/minimal.mis",
And:
Breakpoint 2, GBitmap::readDone (this=0x99ffae8) at dgl/gBitmap.cc:93
93 if (GameInterface::getInterface()->isDedicated())
(gdb) print this
$8 = (GBitmap * const) 0x99ffae8
(gdb) print *this
$9 = {<ResourceInstance> = {_vptr.ResourceInstance = 0x85e5200,
    mSourceResource = 0x0}, static sBitmapIdSource = 0,
  internalFormat = GBitmap::RGB, pBits = 0x9a2dc48 "", byteSize = 98304,
  width = 128, height = 256, bytesPerPixel = 3, numMipLevels = 1,
  mipLevelOffsets = {0, 4294967295}, pPalette = 0x0,
  static csFileVersion = 3}
You can see pBits points to a heap allocation.
This is in 1.3.x. Perhaps it was changed/fixed in a later release?
#51
07/12/2007 (2:49 pm)
Nah, it most likely never changed :) You're probably right. I'm surprised though. And if so, that's pretty interesting. I assumed such assets were never loaded because our memory footprint was so much lower on the server than on the client, and the fact that onPreload () has several isServer () like booleans in place.
#52
07/12/2007 (2:55 pm)
@Stephan: you are running your dedicated builds on linux, correct? And I'm guessing based on Tim's post that the other dedicated build is on windows?
The boot up and initialization paths are a bit different on the two operating systems--linux simply doesn't even create the rendering devices, whereas the windows build does create the device, etc, just never uses it to render.
It's been a long time since I've looked at it, but I do remember seeing differences in how dedicated startups worked on the two platforms.
#53
07/12/2007 (3:15 pm)
Aye, but we used Windows later on. Actually, I recall we used a stripped out GFX layer, so my assumptions above are not valid since that might have been what caused textures not to load on the server. I wonder if it would be much work to make them not load by default on the server in TGE, and if there would be any issues with that.
#54
07/12/2007 (3:19 pm)
Quote:
Nah, it most likely never changed :) You're probably right. I'm suprised though.
And if so, that's pretty interesting. I assumed such assets were never loaded because our memory footprint was so much lower on the server than on the client, and the fact that onPreload () has several isServer () like booleans in place.
Well, I searched through my 1.3 codebase, and I can't find 'onPreload' or 'isServer'! :)
This code behaves the same on a Windows dedicated server, BTW.
#55
07/12/2007 (3:24 pm)
IIRC, most of the differences were at the TorqueScript layer, but again, I'm reaching here--this was stuff I looked at back in early 2005.
#56
07/12/2007 (4:26 pm)
bool ShapeBaseData::preload (bool server, char errorBuffer [256])
{
...
}
That's the function I was thinking of. It's called each time the object is created or gets a datablock assigned, can't remember which. :P
#57
07/13/2007 (10:24 am)
Quote:
I have already prototyped datablock caching, so they are written out and saved the first time they are received. I've done this as both a single flat file (preserving order) and individually, one at a time. The idea is to local exec them before the main menu, and not clear or retransmit them between map loads unless it's a new datablock which came into play sometime after the initial blast or distribution. This should take quite a bit of time off mission load/zoning.
The next step is to take all static objects and do the same thing, cache them and synch them and do a kind of local exec/creation, working out some method of proximity loading (perhaps based on normal scoping rules, or some other block enter/exit scheme). I would like the servers to only be ghosting dynamic objects, and have several servers responsible for a single large map.
Back to the original thread!
Dave, I wish I could share my code with you as a resource here on GG, but it's in a commercial product.
Essentially, this is precisely what I've done in our project, and it will help your mission load times quite a bit.
I view this as a bandage, though, since it should be possible to only load geometries on the client, spawn the client on the server, and load in all the static shapes and data blocks after the fact.
If you take a look at my post in this thread, I describe what my code does:
http://www.garagegames.com/mg/forums/result.thread.php?qt=23589
It sounds like you've implemented the same thing, more or less.
As far as using multiple server processes to simulate a larger world with seamless transitions, you might be interested in checking out Multiverse (I'm not suggesting you use their product, but the white papers there are useful to take a look at).
If you're serious about pursuing an implementation of that inside TGE, lemme know! I'd love to exchange ideas.
#58
07/13/2007 (10:34 am)
Tim, thank you for the encouragement.
I just looked over that thread (which I recently read in relation to MTU on a whole different topic!) and completely missed your writeup. It is very encouraging to see a similar solution has been approached.
It's 100% pertinent to this discussion. There were some other followup posts to it as well which further increase my understanding.
I am curious on one point though, you mention in a couple of places the server sending garbage or uninitialized stuff over the wire. What's a good example of that?
#59
07/13/2007 (10:35 am)
I didn't see a link from a previous post, so if you are interested in the low level details of the connection sequence (including execution flow of the mission loading sequence), take a look at the Connection Sequence Overview.
It's not pretty, but it covers pretty much every function call (some are glossed over) from the client's initial "join()" call, to server side GameConnection::onClientEnterGame().
#60
07/13/2007 (11:31 am)
Quote:
I am curious on one point though, you mention in a couple of places the server sending garbage or uninitialized stuff over the wire. What's a good example of that?
Well, I fixed about a dozen to two dozen places where some member variable of a NetObject-derived class was serialized to the client before it was ever initialized (to zero or whatever). It doesn't manifest as a bug in normal client/server behavior, because in many cases the client doesn't actually do anything important with these values (the reason they are sent at all is that the mask bits are set to -1/0xFF... at mission load time for the static objects).
However, it did interfere with my implementation of caching, because I CRC the entire stream (there's no easy way to know which bits are important and which aren't, so you have to presume they're all important).
Without those fixes, one run of the caching code produces a file with a CRC that won't match a subsequent run.
Do you already have a mechanism for determining if a client cache is valid or not?
Torque Owner Stefan Lundmark