Some details on the high level shading languages
by Josh Williams · in Technical Issues · 03/24/2004 (1:15 am) · 14 replies
While reading through Brian's TSE thread, I decided to put together a post for folks who are interested in learning about the shading languages that are in wide commercial game use. I was originally going to post this in Brian's thread, but I ended up rambling on for quite a bit. I guess it'd be better to start a new thread than to clutter up that one.
The below is the pasted-in text I was going to post over there. Maybe somebody will find this interesting. I am busy with documentation work, but if anyone wants more details or has questions or anything, just post and say so.
---------
The whole shading language thing seems to be getting quite a bit of attention in this thread. I've spent a lot of time working with each of the languages, so in case anyone's interested in seeing a comparison, I'll chime in.
Here's a run-down of each language:
Short Description of Cg
Language:
Cg is a C-like language that is syntactically and semantically almost identical to HLSL (see HLSL section for some differences).
-Cg language features:
-Built-in support for many useful types including boolean,
scalars, and vectors and matrices of many sizes. Data
types in Cg have no pre-defined spec for range and
precision. The Cg compiler binds these attributes based
on the specified hardware profile (see below). Cg supports
a fixed-point, a clamp and many other special data types.
Cg also supports "packed" data-types. Cg has many built-in
type constructors as well, along with some type-casting support
-Variables may be declared anywhere they can be in C++.
Uninitialized variables are not supported
-Roughly supports consts
-Provides the standard arithmetic operators, such that most
any combination of scalars, vectors, and matrices can be
correctly used with standard operators. Cg supports the
standard boolean and logical operators
-Many C control-flow keywords are supported, including if,
while, for, and do
-Offers pretty decent array support, but restrictions apply
to size, dimensionality, and subscript indexing
-Offers an easy-to-use swizzling syntax, some neat bitmask
stuff
-Supports function overloading :)
-Supports C and C++-style comments, and also supports many
standard C pre-processor commands
-Has a large pre-defined library with support for many, many
common graphics functions. Of course, there is no library
or keyword support for things like strings
-Does not support switch, operator overloading, enums,
pointers, recursive functions, classes, templates,
namespaces, or exception-handlers. Still, Cg pretty much
reserves every C and C++ reserved word, so it may grow to
support these things in the future
-Please note: this is not meant to be a complete reference :)
-Also note: the fact that the base Cg spec supports a
specific language feature does not imply that a shader
which utilizes that feature will compile. This can happen
if the hardware profile provided to the compiler indicates
that the target hardware has no support for the specified
feature
Compiling:
-The compile methodology with Cg differs from that associated with HLSL. Cg is compiled to hardware-profile-specific code. As has been pointed out already, this means that a Cg compiler takes a Cg shader program and compiles for a specified hardware profile, using ARB or NV paths. The user can choose which hardware profiles to compile for.
-Cg is nice in that it is not dependent on any API, platform, or hardware. It is not nice in that nVidia technically controls its specification. As such, when it comes to hardware-specific optimizations, you can bet that the Cg compilers supplied by nVidia will usually generate faster code for nVidia hardware than for any other.
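Since this stuff is easier to see than to describe, here's a minimal sketch of a Cg fragment shader using a handful of the features listed above. The entry point, parameter names, and lighting math are purely illustrative, not from any real project:

```cg
// Minimal Cg sketch: per-fragment diffuse lighting.
// Shows vector types, swizzling, type constructors, and library functions.
float4 main(float3 normal   : TEXCOORD0,   // interpolated surface normal
            float3 lightDir : TEXCOORD1,   // direction toward the light
            uniform float4 diffuseColor) : COLOR
{
    // Built-in library calls: normalize, dot, max
    float NdotL = max(dot(normalize(normal), normalize(lightDir)), 0.0);

    // Swizzling: pick out components with .rgba / .xyzw syntax
    float3 lit = diffuseColor.rgb * NdotL;

    // Type constructor building a float4 from a float3 and a scalar
    return float4(lit, diffuseColor.a);
}
```

Fed to the compiler with different profile flags, that same source gets turned into ARB- or NV-path assembly as described above.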
#2
03/24/2004 (1:18 am)
Comparison and Analysis of Compile Methods
Each of these languages shares fairly similar syntax. What really distinguishes them is their compile methodologies.
In my opinion, GLslang is head and shoulders above Cg and HLSL in this regard. GLslang directly targets hardware. That doesn't make much of a difference in terms of how shader programs are coded, but it can make a huge difference in how fast they will run. By specifying that shader program compiling take place in the OpenGL driver, GLslang ensures that IHVs have the best opportunity to implement optimizations.
Whereas GLslang compilation directly targets hardware, HLSL targets DX9 assembly languages. As such, IHV drivers never get the opportunity to optimize the high-level code. These approaches imply two major differences:
1) With HLSL, shaders lose out on potential automatic, high-level, hardware-specific optimizations. Since shaders are translated to assembly, high-level semantics may be lost before optimization can occur. Any programmer experienced in optimization work will know that this can be very bad.
2) Shaders are forced to live with the limits of assembly language specifications, rather than direct reliance on hardware. So, say an nVidia product supports some fancy-pants feature that an ATI product does not. Also, say our shader could benefit from the use of this fancy-pants feature. With HLSL, there is a lower chance our code will automatically take advantage of the feature in question, since HLSL first compiles to assembly and that is all that gets passed on to hardware-specific drivers.
With GLslang, the driver itself would take the high-level shader code, and could detect that the code would benefit from use of the fancy-pants optimization. Please note that this is not a theoretical difference, it has many practical implications. One such practical example is the support of swizzles on (earlier) nVidia vs ATI cards. HLSL is (or was) forced to compile assembly code using the slower, more common swizzling methods.
In terms of compiling, Cg lies between HLSL and GLslang. HLSL gets translated to DX9 assembly; Cg gets compiled to ARB and NV specific code. The Cg compiler is responsible for generating assembly code specific to a particular hardware profile and this can be done either at dev time, or with a run-time library. The basic idea offers more flexibility than DX/HLSL, but GLslang offers the same benefits (and more), with much, much more elegance and opportunity for optimization.
So, GLslang wins?
Well, GLslang's approach requires that IHVs implement good optimizers and compilers directly in their drivers. Obviously, this increases the cost of driver development compared to what HLSL or Cg requires of drivers. However, this requirement also opens up driver implementation to much more innovation and competition. In the end, if the same shader is written in HLSL, Cg, and GLslang, I believe that the GLslang version will typically run the fastest (at least once a particular piece of hardware's drivers have gone through a couple of iterations ;)
Also, GLslang's approach is a bit less consistent than both HLSL's and Cg's. Performance of GLslang shaders on a particular piece of hardware could vary wildly with each driver release. With HLSL, performance should vary less, as the generated assembly code won't change except when the HLSL-to-assembly compiler is updated. With Cg, a similar situation applies: generated code will only change when a Cg compiler is updated and shaders are re-compiled.
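To make the GLslang side of this concrete, here's a rough C sketch of handing high-level source straight to the driver. I'm using the OpenGL 2.0-style entry point names for clarity; 2004-era code would use the ARB_shader_objects equivalents, and real code would check compile/link status:

```c
/* Sketch only: assumes a current OpenGL context with GLslang support
 * (entry points obtained via your extension loader of choice). */
GLuint build_program(const char *vs_src, const char *fs_src)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);  /* driver receives raw high-level source */
    glCompileShader(vs);                   /* the IHV's compiler/optimizer runs here */

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);                   /* vertex and fragment stages coupled here */
    return prog;
}
```

With HLSL, by contrast, the equivalent step produces DX9 assembly first, and that assembly, not the source, is all the driver ever sees.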
#3
03/24/2004 (1:20 am)
[.... continued from previous section... stupid forum character limit... it's like they don't want people droning on and on about boring subjects... sheesh...]
Overall, I think GLslang takes the best approach. Cg is ok, but if nVidia is the only one providing compilers, it's obvious that nVidia hardware will have better performance. At the same time, if other IHVs start providing Cg compilers.. to take advantage of all possible compiler optimizations, you'd have to compile your code on each compiler. Yuck!
If each vendor were actually to offer a Cg compiler, things would get messy. Besides, those vendor-supplied compilers might as well just be a part of their drivers. Enter Glslang. :)
Language feature-wise, Cg and HLSL are identical, and GLslang is pretty similar too, but I think it has a couple advantages. Language syntax-wise, GLslang is a bit uglier than the others, but it offers a few advantages here as well.
Comparison and Analysis of Languages
GLslang's semantic design has many practical development advantages over HLSL and Cg.
GLslang abstracts hardware at a higher level than HLSL/Cg. Unlike in Cg/HLSL, vertex and fragment shaders in GLslang are tightly coupled. This might seem like an onerous restriction, but it has very little practical negative impact and offers many benefits.
One such benefit is that semantic assignments to shared shader parameters are not as repetitive and bothersome as in Cg/HLSL. This approach also offers more opportunity for optimization. For example, say we poorly code a program in which the vertex shader writes a parameter value that the fragment shader never actually uses. With GLslang, the driver could detect this and optimize the vertex shader. Since HLSL/Cg de-couple vertex and fragment shaders, this sort of optimization is not possible.
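To make that coupling concrete, here's a minimal GLslang vertex/fragment pair (identifiers other than the gl_ built-ins are made up). The varying is the kind of shared parameter a driver could analyze across both stages at link time:

```glsl
// Vertex shader: writes a varying the fragment shader may or may not use.
varying vec3 normal;

void main()
{
    normal = gl_NormalMatrix * gl_Normal;   // the shared parameter
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

```glsl
// Fragment shader: if this shader never read 'normal', a GLslang driver
// could see that at link time and strip the vertex-side work.
varying vec3 normal;

void main()
{
    float diffuse = max(dot(normalize(normal), vec3(0.0, 0.0, 1.0)), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}
```

Because both stages are linked into one program object, the driver sees the whole pipeline at once, which is exactly what the de-coupled Cg/HLSL model denies it.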
Clearly, with GLslang, drivers will have a high bar of responsibility (and opportunity) when it comes to optimizing performance for shaders. Developers will have to take some care too. At the time I last studied it in detail (which was just before the final spec was released) GLslang had no method by which developers could detect certain shader limitations. For example, there is no way to detect how many registers your code has to work with. So, if your code is written in such a way that it requires more registers than are available in a single-pass, the driver will have to do some tricky stuff. (like executing code in a multi-pass fashion, or going into software mode.. hehe)
I think people who whine about this are weenies, though. :)
Going with a limitless resource model, as in GLslang, makes a language more elegant and it only requires a bit more care on the part of developers. To me, the advantages, both in practical and idealistic terms, far outweigh the disadvantages. Also, IHVs can innovate around this problem, and the newest cards already alleviate much of the potential trouble in this area.
Shutting Up
Okay, that's my spiel. Hope it was interesting to anybody who likes learning about this junk. Back to work on the TGE docos.
#4
03/24/2004 (2:43 am)
Thank you very much. :D
#5
03/24/2004 (5:12 am)
Hello Josh, thanks for the explanation, but it seems to have led me to the conclusion that TSE should be using GLslang. Can you guess what question is coming next? As far as I am aware, GLslang has been approved in its current state and has been implemented in GL 1.5. Did you guys start working on this before GLslang was approved, or was there a particular reason for starting with a DX version?
Thanks, Ben
#6
03/24/2004 (5:19 am)
Josh, very interesting read. Thank you for the time it took to write it up.
Now where does all this stand in regard to TSE?
#7
03/24/2004 (9:25 am)
Yes, TSE was being worked on long before GLslang/OGL2.0 met final approval.
The above really shouldn't reflect on TSE much at all. Indeed, as Brian has said, TSE was designed with full-on OGL support in mind. And that support is forthcoming. As has also been stated, a large percentage of GG's game and license sales go to non-Windows platforms. That makes OGL support essential. Besides the sales thing, GG just plain likes the idea of supporting open, platform-agnostic technologies... they offer more choice and freedom.
So, it is in both GarageGames' financial and idealistic interest to end up with rockin OGL support. As such, you can bet that's how things will end up. :)
To start, it seems going with DX/HLSL was pretty much the only possible choice. OGL2 and GLSlang just plain took too long to be finalized, and GG couldn't wait around for things to get ironed out. DX9/HLSL has been stable for quite a long time. So, you can see why DX9 got the earlier treatment.
By the end though, I'd bet that if TSE were somehow forced to slightly favor either DX or OGL in certain ways, it will favor OGL. As for GLslang, I am not in charge of deciding how the OGL support will be implemented. GLslang would make sense, but there are other options.
Again, I'm just an intern so I basically have no official say in anything :) Still, if you put a gun to my head and forced me to guess, I would bet that TSE will end up supporting GLslang, and that the GLslang support will kick much ass.
#8
03/24/2004 (10:04 am)
Wow, lots of interesting info in there, Josh!
We will very likely be focusing on GLSL for the OpenGL implementation. The scary thing about it is that it is brand new. HLSL has been around for a while now, and the compiler still has bugs in it. I think it will be a while before GLSL is firing on all cylinders.
One thing that's actually nice about HLSL compiling to DX asm is that you can look at the assembly output of your shader. That really helps to identify compiler problems and learn how it compiles certain things down. It will also tell you how many instructions it uses, which is very useful.
We won't be ignoring DX in the future just because OpenGL runs on anything. The reason is that for PCs running Windows, DX is better implemented across a broad range of hardware.
#9
03/24/2004 (10:23 am)
Thanks, Brian. Yeah, with GLSL in the short term, I'm sure it will take a while for IHVs to get good compilers in their drivers. Although much work has been done on this already, it's a pretty major thing to get right.
I agree with your comment on DX compilation. It is neat and useful to see the assembly generated from higher-level code. From the work I've seen from nVidia and ATI, I think their debug and optimization support for GLslang will be very useful as well.
Also on DX, I didn't mean to imply that GG might be thinking of ignoring it in the future or anything. :) It just looked like folks were getting concerned about TSE and OGL support, so I wanted to reinforce that you've said TSE is designed to offer strong support for both OGL and DX.
To everybody else, I posted this in "General Technical Issues -> Graphics" section for a reason. :) It's just some general information on the languages; doesn't really have much to do with TSE specifically.
#10
03/24/2004 (2:43 pm)
This month's Game Developer has a pretty interesting article on ATI's RenderMonkey 1.5, which is apparently being shown @ the GDC this week. It's supposed to be at least OGL 1.4 compliant. It sounded like ATI will be rolling out some new drivers that support GLSL too.
I had just downloaded 1.0 a couple of days ago--that program is too cool. ;-)
What tool does GG use?
#11
03/25/2004 (4:22 am)
Hello Eric, ya, there was a news article on ATI's site. They are saying that they will be releasing the next version by the end of the month, and it will support GLslang. Looks interesting. I am planning on playing with that as soon as possible.
Later, Ben
#12
03/25/2004 (4:27 am)
The current version of the Catalyst 4.2 drivers already has GLslang support.
Although the compiler seems to need some bugfixes, you can already use it.
Simple shaders are no problem.
I had problems with more complex ones, when using array lookups.
But for testing and playing around, you can use it.
#13
03/25/2004 (7:31 am)
I had the pleasure of seeing RenderMonkey GLSL in action Tuesday at the 3dlabs GLSL seminar here in Montreal.
Major Geek Spooge !!! Xp
The slides of the seminar should be available on 3dlabs website anytime now. I will also write a little something on the topic...
Now, if you want to play with GLSL right now, in a GUI environment, fear not, a Spanish dude comes to the rescue:
Go get Shader Designer here
Be warned that when I downloaded it Tuesday night, all the project/workspaces were corrupted. So you have to load the included shaders by hand and retie them to textures, etc.
An email to the author might shed some light. I'm writing one right after this :)
GLSL rocks my world !! ;)
P.S.: if you're using an nVidia card, you'll need access to the registered dev part of the website to download the little app (emulate.exe) that enables GLSL in the 55-and-up nVidia drivers. ATi, not sure...
Alternatively, you might find this app on some enthusiast sites.
I can't pass it around, EULA/NDA, etc.
#14
03/27/2004 (12:35 pm)
Yeah, RenderMonkey is very slick. Can't wait to test out the GLslang support. I know ATI and 3DLabs have been working on adding the support to RM for over a year, so it should be good to go.
The drivers are another issue :)
Interesting info, Nicolas. I'm glad nVidia has been working on GLSL support in the background too.
Eric, to answer your question, I'm really not sure what GG uses for shader stuff. There's no standard, as far as I know. On the art side, I think it's just Max/Maya w/ plugins and Show. On the programming side, probably just VC, maybe the DX effect edit utility or something. But you'd have to ask Brian to know for sure.
Short Description of HLSL
Language:
-Very similar, syntactically and semantically, to Cg. See above for info on most of the HLSL language features. There are a few differences, however. Some examples:
-Cg supports the fixed data type, HLSL does not (if I remember correctly :)
-I'm not 100% sure, but I think I remember that there are some extremely nitty-gritty binding differences between Cg and HLSL. These haven't ever affected my programs in any practical way (that I'm aware of)
-Some of the functions in the HLSL standard library have different names than equivalent functions in the Cg library
These differences are very minor (but that can make them annoying when trying to migrate from one language to another)
Compiling:
-HLSL's compile methodology differs from Cg's. HLSL is essentially converted to DX assembly shader code. See the below compiler comparison and analysis section for more information.
Short Description of GLslang
-Similar syntax to Cg and HLSL, but some differences.
GLslang language features:
-Data-type support is very similar to Cg, but many types have different names. Also supports specific handle types and user-defined structs (with embedded structs, but these must have a name). Like Cg, also supports many built-in type constructors. Has fairly robust type-casting support. Supports compound data structures (e.g. arrays of arrays and arrays of structs)
-Variables may be declared anywhere, and scope rules follow the norm. Undeclared variables are not supported
-Consts are supported
-Operator support includes standard arithmetic, as well as standard logical, relational, and assignment operators
-Standard control flow is supported with if, while, for, and do. Discard is also included (stops the processing of a fragment shader)
-Fairly robust support for arrays
-Has robust swizzling features
-Supports function overloading :)
-C and C++ style comments are supported
-Has a large built-in library which defines many, many common functions, variables, and constants
-Pre-processor support (#define, #if-stuff, #pragma, #error, some others)
-Does not support switch, operator overloading, enums, pointers, recursive functions, #include, bit-wise operators, classes, templates, namespaces, or exception-handlers
-No user-defined variable or function name can start with the string "gl_", nor can it contain two consecutive underscore characters
-Please note: this is not intended to be a complete reference
-Also note: Unlike Cg, if a GLslang shader conforms to the language specification, it is expected to compile
As you can see, the language features of GLslang and Cg/HLSL are pretty similar. The syntax is a bit different, but even that is pretty similar. What you can do with each language, in terms of shader capabilities, is also pretty similar. The built-in functions, constants, and variables differ greatly. I might get into the language differences more some other time.
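For flavor, here's a tiny GLslang fragment shader sketch touching a few of those features (the sampler name and the 0.5 threshold are arbitrary); note the gl_-prefixed built-ins, the swizzles, and discard:

```glsl
uniform sampler2D baseMap;   // built-in sampler handle type

void main()
{
    vec4 texel = texture2D(baseMap, gl_TexCoord[0].st);  // .st swizzle

    // discard: stop processing this fragment entirely (alpha test by hand)
    if (texel.a < 0.5)
        discard;

    gl_FragColor = vec4(texel.rgb, 1.0);  // type constructor + swizzle
}
```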
Compiling:
In GLslang, compiling happens inside the OpenGL driver. See the discussion below for more information.