Thinking out loud - Let's focus on Accessibility, Performance, and Multi-platform for Torque 3D
by Benjamin Stanley · in Torque 3D Beginner · 02/11/2014 (9:00 pm) · 28 replies
So,
I am going out on a limb here and saying a few things about Torque and myself. First off, I am not a programmer. I am a 3D/2D artist who is still learning the Torque 3D and 2D pipelines. I know Torque 2D a bit better than Torque 3D, even though I consider myself a 3D artist.
Anyway, on to what I came to say about T3D:
I am finding T3D very difficult to use as a first-time developer with the Torque engines. Here is what I think will help T3D in both the short and long run.
1. More documentation and a community wiki - I would be willing to help with the wiki (maintaining it and porting tutorials over).
2. A UI overhaul with expanded tooltips explaining the functionality of each tool. This is one of the big things holding me back; the current UI is unintuitive and confusing.
3. More performance improvements (occlusion culling, multithreading, etc.).
4. More tools that let us work smarter, not harder (a visual shader editor, for example).
5. A modernized asset pipeline (OpenGEX, maybe?) - http://opengex.org/
About the author
My name is Benjamin Stanley. I am a procedural-world enthusiast, 3D artist, and Generally Awesome Guy. I have worked on various projects and modifications in the past, and I am currently looking at making a free game with Torque 3D or Torque 2D.
#2
02/11/2014 (9:42 pm)
@Lukas - I disagree, especially with #3. Performance is key when making games. If a player is not experiencing a good frame rate, they will bring it up with the developer/publisher, one way or another.
What was the reason(s) for not including occlusion culling in the engine? Just curious.
I ask because it is now a necessity in today's games for keeping a mid-to-high frame rate; it is right up there with level-of-detail meshes and the like.
Multithreading is another performance feature that needs to be added to Torque to keep it up to date with today's games.
#3
02/11/2014 (9:50 pm)
Read this thread. Edit: basically it's because occlusion culling reduces the number of primitives rendered; however, it's the number of draw calls that's the issue.
My point with the performance was that it's definitely not critical in T3D. I can make a game that runs at 60 FPS with today's version of T3D. The performance improvements just allow for more slack, and that's nice for AAA games, but let's face it: we are all indies here.
Multithreading and most other performance work is simply SO MUCH WORK that it just doesn't pay off at the moment. In the future, sure, but right now we should focus on usability, code architecture, and so on. We should get the basics straight before we throw ourselves at building an AAA engine.
#4
02/11/2014 (9:58 pm)
@Lukas - I see where you are coming from now. :)
#5
02/12/2014 (12:48 am)
I'd say for editors, we have an internal document created by our artists at WLE with all the things they have found they need.
Maybe the T3D community should do something similar?
Programmers get together and make a document saying "What features do we need, what do we want, and how should they be implemented?"
Artists get together and make a document saying "What features do we need, how should they fit into the editors, and what behaviour should they have?"
Our internal design document has a thorough description of exactly what the artists need the editors to do, with images that illustrate each issue and how it should be.
There are also images showing how to access the controls (e.g. they might have photoshopped a button onto the editor) so the programmers know what features are needed and where to put them.
Apart from that, I think the art pipeline is also something we should look into, as you mentioned, since it seems to create a lot of issues for artists. FBX would be a nice thing..
As you might see, I think it's the artists we need to cater to most, but helping artists usually also helps new people, since, like artists, new people have only minor technical knowledge.
From the programmer's POV, I think the focus should be on refactoring the engine toward a compositional design, so that you can change by addition and not by modification, letting us add features without destabilizing the codebase.
But again, we should make a document saying where it is most urgently needed. I've already done this for the ParticleSystem and partially for the Projectiles, but I'm sure there are other places that could benefit from being compositional.
#6
02/12/2014 (2:30 am)
Despite the risk of this turning into another "someone do X" thread, I'll give you my current insight based on experience.
I think the following should be addressed:
- A wiki would be immensely useful.
- Performance improvements. There's some pretty dumb code in there. I'm sure you've all seen it.
- Getting the Mac and Linux ports in a workable state is a must.
- More documentation on the shader system would be appreciated.
- The procedural shader builder is a mess; it's often difficult to figure out what is happening.
- Mission loading can be a problem since all objects are loaded at once, so a content-heavy mission can block, which makes things look really choppy.
- It's difficult to put CPU-intensive background tasks in threads, since the console code is largely not thread-safe, nor are any of the weak/strong pointer classes.
- Some features are poorly maintained (e.g. the FMOD event code).
As for the current content pipeline, I don't have any problems with it myself (using Blender here). However, I would say applying logic to objects can be a little difficult, since a lot of the engine is built around bulky all-in-one objects and the dreaded ShapeBase, though I wouldn't consider that part of the content pipeline.
I'd love to push back some fixes we've done, but without an active development committee I can't really justify it, as nobody would bother merging them in.
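The thread-safety point above is easy to demonstrate. A common workaround when the console layer must stay on the main thread is to do the heavy computation on a worker and marshal only the results back as queued callbacks. This is a minimal generic sketch, not Torque code; `MainThreadQueue` and its members are invented names:

```cpp
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

// Work computed on a worker thread is handed back as callbacks that only
// ever run on the main thread, so non-thread-safe engine code (the console,
// the pointer classes) is never touched off the main thread.
class MainThreadQueue {
public:
    void push(std::function<void()> fn) {
        std::lock_guard<std::mutex> lock(mMutex);
        mPending.push(std::move(fn));
    }

    // Called once per frame from the main loop; runs every queued callback
    // and returns how many were executed.
    int drain() {
        std::queue<std::function<void()>> local;
        {
            std::lock_guard<std::mutex> lock(mMutex);
            std::swap(local, mPending);
        }
        int ran = 0;
        while (!local.empty()) {
            local.front()();
            local.pop();
            ++ran;
        }
        return ran;
    }

private:
    std::mutex mMutex;
    std::queue<std::function<void()>> mPending;
};
```

The swap-then-run pattern keeps the lock held only for the swap, so workers are never blocked behind a slow callback.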
#7
02/12/2014 (2:45 am)
@James, well, we could just create a free wiki on something like Wikia; then there would be no problem finding someone to host it.
It's true that there is some dumb code in the engine; it would be useful to have a list of code that might need a loving hand (again, back to the documents).
About mission loading: are you talking about streaming missions, i.e. on-demand loading of objects?
#8
02/12/2014 (3:37 am)
Quote: There's some pretty dumb code in there. I'm sure you've all seen it.
This. I mean, the situation's not dire; the code has just lacked the attention it deserves over the years. And may I say it's great to see you back around here, James!
Quote: I'd love to push back some fixes we've done but without an active development committee I can't really justify it as nobody would bother merging them in.
The fun thing about git that nobody's really taking advantage of is that GG's repo doesn't have to be T3D's one source of truth. If you submit a pull request, I can merge your pull into my own codebase directly, regardless of what GG does with it. I totally understand that long-run usefulness relies on things being accepted upstream - but I just want to get the word out that git is a lot more flexible than people are used to :). And just having the code public as a branch or a pull request means it can be useful to people. We just have to educate.
Ben, I agree that usability is very important. I tend to look at it from a programmer's perspective: the code side of the engine also needs a good usability pass.
Also, nobody seems to have noticed that I discovered a way to work with a GitHub wiki as a separate repository, so you can fork it, pull-request against it, etc. Even as an interim solution, that would allow us to have a more community-driven main wiki on the GitHub page.
#9
02/12/2014 (4:02 am)
Ben, I agree with 1, 3, and 4. I don't know enough about OpenGEX to say anything about #5, and on #2 I can't say I disagree, but I'm kind of with Lukas on that one: I think the UI is pretty great - one of the better ones I've worked with. Sure, some of it could use better documentation, but the editors are great and the focus should be on other areas.
#10
02/12/2014 (4:17 am)
@Lukas, I was referring to loading the mission script itself. It would be useful if the mission script could be enumerated across multiple frames, so that, for instance, objects 1, 2, and 3 could be loaded in frame 1, and objects 4, 5, and 6 in frame 2. This would let progress bars and the like update properly rather than making the whole app look like it's hanging.
I did experiment with putting everything in a background thread, but the problem with that is that not a lot of the engine is guaranteed to be thread-safe.
Regarding the wiki, I'd suggest seeing if we could put something on, say, wiki.torque3d.org. Although, as everyone probably knows, wikis are great targets for spammers, so realistically someone has to moderate it. A GitHub wiki would also be OK... in fact the repository kind of already has one. But I think something a bit more comprehensive that isn't tied to GitHub would be useful. Then again, this is perhaps something that would be better coordinated by the committee, should it ever re-form.
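The frame-sliced loading idea is straightforward to sketch. This hypothetical loader (not Torque's actual mission code; `IncrementalLoader` and its members are made-up names) processes a fixed budget of objects per tick so a progress bar can redraw between ticks:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Frame-sliced loader: instead of instantiating every mission object in one
// blocking pass, load at most `budget` objects per tick so the UI and a
// progress bar can update in between ticks.
class IncrementalLoader {
public:
    explicit IncrementalLoader(std::vector<std::string> objects)
        : mObjects(std::move(objects)) {}

    // Load up to `budget` objects; returns true while more work remains.
    bool tick(std::size_t budget) {
        std::size_t end = std::min(mNext + budget, mObjects.size());
        for (; mNext < end; ++mNext)
            loadOne(mObjects[mNext]);   // stand-in for real instantiation
        return mNext < mObjects.size();
    }

    float progress() const {
        return mObjects.empty() ? 1.0f
                                : float(mNext) / float(mObjects.size());
    }

    std::size_t loadedCount() const { return mLoaded; }

private:
    void loadOne(const std::string&) { ++mLoaded; }

    std::vector<std::string> mObjects;
    std::size_t mNext = 0;
    std::size_t mLoaded = 0;
};
```

In practice you would likely budget by wall-clock time per frame rather than by object count, since object creation costs vary wildly.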
#11
02/12/2014 (4:58 am)
Quote:
Read this thread. Edit: basically it's because occlusion culling reduces the number of primitives rendered; however, it's the number of draw calls that's the issue.
Occlusion Culling does cull shapes before they're submitted to the graphics card, which reduces draw calls.
Quote:
What was the reason(s) for not including occlusion culling in the engine? Just curious.
It's *a lot* of work.
Quote:
A modernized Asset pipeline (OpenGEX maybe?) - http://opengex.org/
What's wrong with the current pipeline? I'm not an artist, so I wouldn't know.
#12
02/12/2014 (5:37 am)
@Stefan, yeah, but the frustum culling that T3D uses picks up most of the individual shapes..
More advanced solutions like Umbra give more advanced culling, but I'd guess most of it is polygons getting culled rather than actual objects, since frustum culling picks most of it up and is very cheap.
Quote: What's wrong with the current pipeline? I'm not an artist, so I wouldn't know.
Honestly, I don't know either; there is just thread after thread on the forums complaining about trouble importing art.
I believe it is because COLLADA support is fine in Blender, but in Max and Maya the support is more limited and there are more issues there.
So I guess only a few indies actually have these kinds of problems, but I suppose some people do throw a lot of money at Max...
#13
02/12/2014 (5:58 am)
I never had a problem loading DAE files. I've tried software from Max, Blender, DAZ Studio, etc., and never had an issue. The only software I've had problems with is SketchUp, and even there I found a way around it. I think the DAE support is great!
Maybe it's time to start looking at FBX or openAssimp..
#14
02/12/2014 (5:59 am)
Quote:
@Stefan, yeah, but the frustum culling that T3D uses picks up most of the individual shapes..
More advanced solutions like Umbra give more advanced culling, but I'd guess most of it is polygons getting culled rather than actual objects, since frustum culling picks most of it up and is very cheap.
Umbra culls per object, and so do most occlusion solutions. Culling primitives would be too expensive (you'd have to rebuild the buffers each frame.. ew) and doesn't lower the batch count.
But I agree; frustum culling is very cheap compared to occlusion culling and works most of the time in less crowded scenes. I was under the impression that T3D performed rudimentary occlusion culling by checking depth one frame behind, but I haven't been following its development in that regard, so I could be wrong.
Quote:
Idk tbh, there is just thread after thread on the forums complaining about troubles with importing their art.
I believe it is because COLLADA support is fine in Blender, but in Max and Maya the support is more limited and there are more issues there.
So I guess it's only few indies that actually have these kind of problems, but I guess some people throws a bunch of money after Max...
Ah, that makes sense. I know the OGRE guys wanted to use COLLADA at first but then they turned around and suddenly didn't want to anymore.
I've never had any issues with it myself, though.
Quote:
I never had a problem loading DAE files..I tried with multiple different software from Max, blender, DazStudio, etc..Never had an issue. The only software that I've tried that I've had problems with is sketchup. I know I found away around it but I think the DAE support is great!
Maybe time to start looking at FBX or openAssimp..
This is my experience too.
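For readers unfamiliar with the per-object culling being discussed above: frustum culling is cheap because it reduces to a handful of plane tests per object. Here's a minimal sketch, illustrative only and not T3D's implementation, testing a bounding sphere against six inward-facing planes:

```cpp
#include <array>

// Inward-facing plane: a point is on the visible side when
// nx*x + ny*y + nz*z + d >= 0.
struct Plane  { float nx, ny, nz, d; };
struct Sphere { float x, y, z, r; };

// Per-object frustum cull: the object is rejected only when its bounding
// sphere lies entirely behind at least one of the six planes.
bool isVisible(const Sphere& s, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        float dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
        if (dist < -s.r)
            return false;   // fully outside this plane: cull
    }
    return true;            // inside or intersecting every plane
}
```

Occlusion culling, by contrast, needs depth information about what is in front of what, which is why it is so much more expensive than these six dot products.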
#15
02/12/2014 (6:06 am)
Quote: 3. More Performance Improvements
I could think of:
- Instancing for objects other than Forest objects (or maybe I'm seeing it wrong)
- Draw call batching
- The above two points combined, to make better use of prefabs
- Allowing NULL as the lowest possible detail level, instead of impostors
- Making it possible to load materials and geometry into memory during mission load, to finally get rid of the T3D lag!
Again, a lot of work!
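The draw-call batching item above can be illustrated with a toy example: if objects sharing the same mesh and material are collapsed into one instanced draw, the submitted call count drops from one-per-object to one-per-batch. A sketch with made-up names, not engine code:

```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// One render request per scene object.
struct DrawItem { int meshId; int materialId; };

// Toy batcher: objects sharing a (mesh, material) pair collapse into one
// instanced draw. The returned count is draw calls actually issued;
// without batching it would be items.size().
std::size_t batchAndCount(const std::vector<DrawItem>& items) {
    std::map<std::pair<int, int>, std::size_t> batches;
    for (const DrawItem& it : items)
        ++batches[{it.meshId, it.materialId}];   // instances per batch
    return batches.size();
}
```

This is exactly why a scene full of identical rocks and trees benefits so much from instancing: hundreds of objects but only a handful of unique (mesh, material) pairs.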
#16
02/12/2014 (6:10 am)
Quote:
Draw call batching
This was implemented in TGEA. Are you sure it's missing?
#17
02/12/2014 (6:25 am)
The downside to GPU-based occlusion culling is more draw calls, because it works by drawing the whole scene to a separate texture, then redrawing each object (that you want occlusion data on) wrapped in a begin() and end() occlusion query. DX will return the number of visible pixels for the object drawn during the query, and then we just cull anything that has 0 pixels drawn. It's pretty simple, but as has been pointed out, this greatly increases draw calls.
I'll know more when I get it working better, but I personally don't think we need per-frame occlusion culling. I think we'll get acceptable results by doing the occlusion render once every 30 ms, for instance, and then running off the results of the previous query until a new one is ready. This is all speculation, though; I'll know more in a few days.
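The "run off the previous query until a new one is ready" idea reduces to a small piece of per-object state: latch the last completed query's verdict and keep using it while newer queries are in flight. A sketch of just that bookkeeping, with no GPU code and all names invented:

```cpp
// Latency-tolerant occlusion verdict: keep using the last completed query's
// result until a newer query finishes, so an in-flight GPU query never
// stalls the frame and never feeds us half-baked data.
struct OcclusionState {
    bool lastVisible = true;   // optimistic default: draw until proven hidden

    // `ready` is whether the GPU has finished the outstanding query;
    // `pixels` (the visible-pixel count) is only meaningful when it has.
    bool update(bool ready, unsigned pixels) {
        if (ready)
            lastVisible = (pixels > 0);
        return lastVisible;    // verdict used for this frame's draw decision
    }
};
```

Defaulting to visible is the conservative choice: a false "visible" costs one wasted draw, while a false "hidden" makes an object vanish on screen.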
#18
02/12/2014 (6:33 am)
Quote: This was implemented in TGEA. Are you sure it's missing?
No, I'm not sure. It could be that it's implemented in a different way, under another name, and I'm just not seeing it. There are no docs about this as far as I know.
#19
02/12/2014 (6:35 am)
Quote:
The downside to GPU based occlusion culling is more draw calls because it works by drawing the whole scene to a separate texture, then redrawing each object (that you want occlusion data on) while wrapping it in a begin() and end() occlusion query. DX will return the amount of visible pixels for the object drawn during the query, and then we just cull anything that has 0 pixels drawn. It's pretty simple, but as it's been pointed out this greatly increases draw calls.
You don't have to draw the scene a second time. You can fetch the data from the last drawn scene and delay the occlusion data to the next frame. Many last-generation AAA games do this. See here.
Quote:
I'll know more when I get it working better, but I personally don't think we need per-frame occlusion culling. I think we'll get acceptable results by doing the occlusion render once every 30ms for instance, and then running off the query results from the previous query until a new one is ready. This is all speculation though, I'll know more in a few days.
Doing occlusion culling every 30 ms won't work: objects that were occluded last frame will pop up instantly in the middle of the screen on the next frame if they move fast enough relative to the camera. Very ugly.
I really think they did it right with Frostbite 2 and their software occlusion renderer. I've been working on that for the last few months and so far it's doing great.
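The software occlusion renderer mentioned above can be boiled down to a tiny depth grid: occluders write conservative depth, and an object is skipped when its screen rectangle is fully covered by nearer depth. A toy sketch of that test only; a real implementation rasterizes actual occluder triangles at low resolution and all names here are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Toy software occlusion buffer: occluders write conservative depth into a
// small grid; an object is skipped when every cell under its screen
// rectangle already holds strictly nearer depth than the object itself.
class DepthGrid {
public:
    DepthGrid(int w, int h) : mW(w), mDepth(std::size_t(w) * h, 1.0f) {}

    // Rasterize an occluder's screen rect at depth z (0 = near, 1 = far).
    void drawOccluder(int x0, int y0, int x1, int y1, float z) {
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                mDepth[std::size_t(y) * mW + x] =
                    std::min(mDepth[std::size_t(y) * mW + x], z);
    }

    // True when the rect at depth z is hidden behind existing occluders.
    bool isOccluded(int x0, int y0, int x1, int y1, float z) const {
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                if (mDepth[std::size_t(y) * mW + x] >= z)
                    return false;   // some cell is not covered by nearer depth
        return true;
    }

private:
    int mW;
    std::vector<float> mDepth;
};
```

Because everything runs on the CPU, the result is available the same frame, which avoids the pop-in that the delayed GPU query approach can suffer from.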
#20
02/12/2014 (6:46 am)
The whole time I was doing it I kept thinking "this is stupid, why am I redrawing all this data when the scene has already been drawn?", but that's all I could find in samples and tutorials. They all render to a smaller texture (512x256 in most examples) and work off that. Thanks for the link, though; that clears a lot of things up. I'll try that approach again when I get home.
Torque Owner Lukas Joergensen
WinterLeaf Entertainment
It would be great to have an official list of tutorials like this one, though, so newcomers have a place to go.
2. This would be nice but, to me, it's not really that urgent. T3D has the UI and controls I liked the most out of all the engines I have tried, because it's simple, accessible, and doesn't require any special knowledge of the controls; they're simply the same controls you use for your game.
3. This is pretty unnecessary. It would be nice, but most of the performance improvements we could do are overrated and not necessary for releasing a game.
E.g. the occlusion culling thing has been brought up over and over, and every time it has been dismissed, typically with good reason.
I'm not saying performance improvements wouldn't be a good thing; they're simply not "critical".
4. Meh, this is a pretty generic statement; you'd have to specify for each tool. Just having "more" tools is typically a bad thing - the fewer tools the better - though one tool might make a huge difference in production time, and then it would make sense to have it.
5. OpenGEX doesn't support Blender, and relying on some new, unknown format only really lasts until that format dies. It's risky and might suddenly set us way back.
FBX would make sense, since it seems to be here to stay, but no one fixed the FBX importer that was released by Chris Calef / BrokeAss Games, so I guess it's not a huge priority within the community.