Game Development Community

Bad Nvidia GeForce Go problem with Torque3D

by Nmuta Jones · in Torque 3D Professional · 01/30/2010 (10:41 am) · 44 replies

I'm having a very bad problem on a notebook with GeForce Go 7150m trying to run Torque3D 1.0.1

Even the EMPTY template is very, very slow.... probably about 7 fps. Very choppy.

I just updated the drivers. No change.

system specs
2008 HP Pavilion laptop (dv6000 model). It's only 2 years old.
2 Gigs DDR2 RAM
Nvidia GeForce Go 7150m with 793 MB shared video memory available
Dual Core AMD Athlon X2
DirectX 10

This same laptop runs TGE fine, very fast. And it can play most retail commercial PC games.

I updated the drivers, like I said, but still no luck.
If it can't even play the "EMPTY" demo without slowdown, I imagine it's a card issue. I have more than enough VRAM.

This is a very popular series of laptops. I'm sure that Best Buy has sold tens of thousands of them. I went to visit my brother last year and found out that he has the exact same machine and we never even compared notes before we both bought laptops.

This issue alone will probably cause me to revert back to TGE for now. My other two laptops run T3D perfectly fine (and they are even older machines), but I'm afraid that if I release a T3D game, anyone with my laptop will simply not be able to play it, even if they update their drivers.



#21
02/01/2010 (3:12 pm)
a) is not really a realistic approach. From TGEA it's potentially doable, but even that is hairy. From T3D it's basically redoing all the work.


You'd be better off going the other way: TGE + the Modernization Kit.
#22
02/01/2010 (3:14 pm)
Modernization Kit? Wow... just finished looking at it. Although the MK seems to open up a whole can of worms.

I only have about four or five main engine changes; the rest is all script. I think option "a" still works for me... a lot of the scripting stuff in T3D is very similar to TGE... some of the scripts I could copy right over with a few tweaks.
#23
02/02/2010 (9:13 am)
Having used TGE and T3D extensively, I can tell you porting a game between them can be a tough affair, even if your game is made only of scripts. There are many more changes than merely the way graphics are rendered: sound, asset handling, the editors, etc. You'd have nearly as much work as making an Xbox 360 game and porting it to the Wii.

And quite frankly, TGEA 1.8 actually performs better than TGE on shitty integrated cards if you force it to run in shader model 1.1 and respect those cards' polygon and texture throughput. Direct3D also plays nicer with Vista, and depending on the situation you might get 3D acceleration with the out-of-the-box drivers.

So, I suggest you go with TGEA 1.8, since I find it can scale down better than T3D. But there is something you can try with T3D: force the shadergen to generate Shader Model 2.0 shaders. Low end 6xxx and 7xxx cards are quite terrible at SM 3.0, but I found they run 2.0 shaders quite well. Same applies to Intel's SM 3.0 and SM 4.0 cards - stick to SM 2.0 and you can use normal mapping and whatnot at great framerates.

Just add these to your prefs.cs:
$pref::Video::forcePixVersion = true;
$pref::Video::forcedPixVersion = 2.0;
#24
02/02/2010 (2:22 pm)
Wow Manoel! I need to talk with you further.

I tried

$pref::Video::forcePixVersion = true;
$pref::Video::forcedPixVersion = 2.0;

in T3D on a mid-level machine and it TRIPLED my frame rate.

The only problem is that it made my character disappear in 3rd person view! I can still see Gideon's shadow but not Gideon. But that should not be a problem, hopefully, since it seems to only indicate that I need to make sure I don't use a character that has advanced shader effects enabled.

Thanks again.



#25
02/02/2010 (9:41 pm)
Yeah I tried those same two lines in another laptop
$pref::Video::forcePixVersion = true;
$pref::Video::forcedPixVersion = 2.0;

and the same result: higher frame rate but Gideon disappears.

#26
02/02/2010 (9:48 pm)
Check the shaders; perhaps one that's pure SM3 is involved.
#27
02/02/2010 (10:17 pm)
I'm having some success over here.

I have now achieved almost the same frame rate I got in my TGE demo, on my crap card, using T3D.

The key is FORCING Pixel Version 2 as Manoel suggested.

As far as I'm concerned, I couldn't care less about the Gideon character since I won't be using him anyway. I put in another character and it's all working at much better frame rates.



#28
02/03/2010 (9:21 am)
There are a few bugs with the shader gen creating shaders that are a bit too long for SM 2.0. The problem features are detail normal map and specular map: remove them and the materials will compile.

Another solution is forcing SM 2.0a in cards that support SM 3.0 or higher, since that mode has a few more instructions and should allow complex materials.

Buuuuuut, I just found out it's not possible to force 2.0a in Torque, because $pref::Video::forcedPixVersion is a float and the target names for 2.0a are "vs_2_x" and "ps_2_x". But you can change this in GFXD3D9Shader::_init() in gfxD3D9Shader.cpp by checking if the value is greater than 2.0 and smaller than 3.0 and forcing the target names to "vs_2_x" and "ps_2_x".

Also, you can automatically force shader 2.0 on certain video cards using the profile scripts. T3D checks for scripts corresponding to the current GPU there and executes them, so you can run custom code or force certain prefs for specific video cards, or for all video cards from the same vendor.

Example: you have a GeForce Go 7100. Torque will look for these scripts in the profile folder, in order, and execute the first one it finds:

D3D9.NVIDIA.GeForceGo7100.cs
D3D9.NVIDIA.cs
D3D9.cs

Since there might be tons of GeForce 7xxx variants, you can have code in D3D9.NVIDIA.cs that gets the card name (using GFXCardProfiler::getCard()), parses it to check whether it's a 7000 series, then checks the last three digits to determine how shitty it is.

Example: The card name is "GeForce Go 7150". You go through each word looking for one that starts with a "7", then you subtract 7000 from it (Torque automatically converts strings to numbers if you do math on them) and if the result is smaller than 800, force shader 2.0. This will force shader 2.0 on all 7xxx cards lower than a 7800.
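That walkthrough can be sketched in TorqueScript, in a hypothetical D3D9.NVIDIA.cs. GFXCardProfiler::getCard() is the call mentioned above; getWordCount(), getWord(), getSubStr(), and strlen() are standard TorqueScript string functions. The four-character length check is my own assumption, just to avoid matching stray words that merely begin with "7":

```
// D3D9.NVIDIA.cs -- illustrative sketch, not shipped code.
// Find the model number in the card name and force SM 2.0
// on anything in the 7xxx series below a 7800.
%card = GFXCardProfiler::getCard();   // e.g. "GeForce Go 7150"

for (%i = 0; %i < getWordCount(%card); %i++)
{
   %word = getWord(%card, %i);

   // Looking for a four-digit model number starting with "7"
   // (the length check is an assumption about the name format).
   if (getSubStr(%word, 0, 1) $= "7" && strlen(%word) == 4)
   {
      // Torque coerces the string to a number for the subtraction.
      if (%word - 7000 < 800)
      {
         $pref::Video::forcePixVersion = true;
         $pref::Video::forcedPixVersion = 2.0;
      }
      break;
   }
}
```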

And, of course, create a D3D9.Intel.cs script and force shader 2.0 for all Intel cards that claim to support higher shader models.
#29
02/04/2010 (5:51 am)
@Manoel:

The problem is that I own a TGE license, and T3D, but not TGEA 1.8.

And they don't sell TGEA 1.8 anymore so I'm stuck. I agree, TGEA is probably the best bet for this particular project.

#30
02/04/2010 (6:14 am)
TGEA would basically come with a T3D Pro license; previous versions are included in the download.

But if you have the non-source (binary) license, then no...
#31
02/05/2010 (1:50 am)
@Manoel

You know what: you're right. I just tried a demo built with TGEA, and TGEA actually gets better frame rates on my craptop than TGE does.

But I can't afford the Pro license right now, so I'll stick with T3D binary and TGE for now.

#32
02/05/2010 (2:05 am)
If you're working with the binary, then porting between TGE and T3D should be a non-issue. Even if you had code involvement, I don't see that as a serious porting issue.

Most scripts and GUIs work as expected when moving from TGE to T3D, though there are a few changes to function calls and object definitions (audio datablocks are now SFXProfile, for some reason).

You'd probably be better off porting from TGE up to T3D, not the other way around, since T3D has object types that don't exist in TGE, but it's rarely true the other way around.

T3D uses a different directory structure, but it's a non-issue. There's still a server scripts directory, client scripts directory, art directory, terrains directory, etc. They're just in slightly different locations. The datablocks have been moved out of server scripts and into art/datablocks, but you absolutely do not have to follow this convention if you find it confusing to remove your datablocks from your server scripts. If you do use it, it's basically like .H vs. .CC/CPP files: datablocks are your headers, and their functions are your CPP files.

It seems like you have a really oddly wide target demo, with lots of users who have ancient cards but another segment who complain about last-gen graphics. Not saying it's a bad thing (the wider your demo, the more users you'll have), I'm just surprised that you have 1 project that people expect to both run on old hardware and have modern shader effects. Anyway, if I were you I'd build it for TGE and then port it, shine it up a bit, and offer one DL as the "HD" version. People love that term these days. ;)
#33
02/05/2010 (3:01 am)
@Henry

The issue is that most people don't know what hardware they have, they just expect it to work.

Unfortunately, when people think "video games", they think CONSOLE: uniform hardware, where all users will have the same experience and frame rate within the realm of that one console for any given game.

But yes, my target market is pretty wide, it's based on age, primarily...this game targets grade and middle school males between ages 8 and 14. But that group doesn't do much purchasing...parents purchase for them, and that's where the ignorance lies.... "Can I just put this in my computer and it will work?"....that's the attitude.

I like your advice, and I'm going to follow it.... starting in TGE and then porting "up". There are a few source changes I can't port over, but they are mostly things that have already been hard coded into T3D like swimming and jetting.


#34
02/11/2010 (2:08 pm)
@Nmuta

Our goal is to ensure that T3D can run on just about any shader-capable card, even the low-end Intel GMAs.

The 1.1 beta, which was just released, has a bunch of new quality preferences for lowering the performance requirements of T3D.

During beta we'll be tweaking those adjustments and fixing bugs like the long shaders when in pure SM 2.0 mode.
#35
02/11/2010 (2:21 pm)
@ Tom

First of all, I can't tell you how refreshing it's been since I came to the Torque community.... you can have conversations about hardware related issues with adults who actually understand that "mass market hardware" is a reality.

Before I came to Torque, I would mention mass market hardware and people would respond with this arrogance, as if those users never even existed. They seemed to think that everyone was running Alienware with a terabyte of dedicated VRAM, and that people who were NOT must be running Apple II systems from the 1980s, and screw them anyway.

Torque has ALWAYS, always, in my opinion, had the most brilliant and optimized graphics and the BEST performance by far on virtually any machine.

Other development licenses I've held and used are Shockwave3D (Director... terrible, I know), Quest3D, Virtools educational, and Blitz3D.

Of all of them, Torque wins hands down in almost every conceivable category that is important to me. And now that T3D runs in a browser, it's really got everything within my particular set of desired features.


With all of that said, I know it's a tremendous challenge to keep optimizing T3D for crappy mass market laptop cards.

But to the degree that you are able, it would really help maintain Torque's tradition of combining mass market accessibility with the option of also producing amazing graphical output.

Much respect to all the work you guys have done, and yes, I do eventually plan to upgrade to Pro, within the next few months.

I just realized that I have Binary 1.0.1, so I will check out the new 1.1

thanks!


#36
02/11/2010 (2:38 pm)
Quote: With all of that said, I know it's a tremendous challenge to keep optimizing T3D for crappy mass market laptop cards.
The approach we have taken to do that is using Basic Lighting and cutting more advanced shader features. In 1.1 you can disable parallax, specular, and even normal mapping by changing a $pref. This is what the 'Shader Quality' setting does under the hood, and it's completely configurable for whatever trade-offs you want to make in your own game.
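Under the hood that amounts to a few lines in prefs.cs. A hedged sketch follows; the exact $pref names here are my assumptions, so verify them against the stock 1.1 scripts (or the 'Shader Quality' option code) before relying on them:

```
// prefs.cs -- illustrative only; verify these names against the 1.1 scripts.
// Each pref trades a shader feature for speed on weak cards.
$pref::Video::disableParallaxMapping = true;  // assumed name: drop parallax
$pref::Video::disablePixSpecular = true;      // assumed name: drop per-pixel specular
$pref::Video::disableNormalmapping = true;    // assumed name: drop normal maps
```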
#37
02/11/2010 (2:39 pm)
@Tom

I have binary 1.0.1
What are those $prefs?

when you say 1.1, you mean 1.0.1, right?
#38
02/11/2010 (2:42 pm)
Lowering down to SM2 tends to help, but as of the alpha there were some snags with some shaders going over limits. Hopefully it gets fixed up, because it pretty much kills those video cards when the shader is outputting errors by the hundreds.

You guys might consider putting SM2 options into the option panel as well to help out with those cards that officially support SM3, but kill over when just thinking about it.

I did try it in Burg with my decent video card and I went from 300 FPS to 500 FPS in basic :-). Pretty much we look at BL as whatever it takes to get some piece of crap to render the scene. Don't care if it looks like a pile of garbage.
#39
02/11/2010 (2:46 pm)
@Joshua

you mean "keel" over?
#40
02/11/2010 (2:49 pm)
I think it literally kills over :-). But yeah, keel would be the right term :-). The new options in 1.1b, in conjunction with some stuff we added, along with SM2 hopefully working properly in the future, should allow for some pretty aggressive targeting of low-end junkers, I am hoping at least :-).