PowerPC OSX players take heart
I have written an app (PowerPC and possibly Tiger only, but I could make an Intel Mac build without much trouble) which, among other things, can all but eliminate station/asteroid/Leviathan lag on all newer cards and drastically improve performance on older ATI cards (pre-Radeon 8500) and possibly other cards as well.
However, use of this app violates the Vendetta EULA, as it replaces certain system OpenGL functions with its own code. Therefore, I won't post the link until a Dev gives me a go-ahead.
To the Devs:
The first performance boost mentioned above is achieved by disabling GL_APPLE_element_array, GL_APPLE_vertex_array_object, and GL_APPLE_vertex_array_range, the second by forcing all new OpenGL contexts with any depth buffer at all to use a 32-bit one.
-:sigma.SB
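(A rough sketch, for the curious, of one way a hook like this could work on OS X. This is not the actual GLStuff code, just an illustration of the idea: a dylib loaded via DYLD_INSERT_LIBRARIES uses dyld interposition to replace glGetString and filter the three GL_APPLE extensions out of the extension string. It only "disables" them if the game discovers extensions by parsing glGetString(GL_EXTENSIONS) in the first place, which is an assumption here.)

/* hide_apple_arrays.c - illustrative sketch only, not the actual GLStuff code.
 * Build as a dylib and load it with DYLD_INSERT_LIBRARIES so dyld's
 * __interpose mechanism swaps our glGetString in for the system one:
 *
 *   gcc -dynamiclib hide_apple_arrays.c -framework OpenGL -o hide.dylib
 *   DYLD_INSERT_LIBRARIES=./hide.dylib /path/to/the/game
 *
 * (dyld does not re-route calls made from inside the interposing dylib
 * itself, so calling glGetString below reaches the real function.)
 */
#include <OpenGL/gl.h>
#include <stdlib.h>
#include <string.h>

/* Extensions we want the game to believe are absent. ASSUMPTION: the game
 * detects them by parsing the glGetString(GL_EXTENSIONS) string. */
static const char *hidden[] = {
    "GL_APPLE_element_array",
    "GL_APPLE_vertex_array_object",
    "GL_APPLE_vertex_array_range",
};

static const GLubyte *my_glGetString(GLenum name) {
    const GLubyte *real = glGetString(name);
    if (name != GL_EXTENSIONS || real == NULL)
        return real;

    /* Rebuild the extension list token by token, skipping the hidden ones.
     * Cached in a static buffer; leaked once per process, which is harmless. */
    static char *filtered = NULL;
    if (!filtered) {
        filtered = calloc(strlen((const char *)real) + 2, 1);
        char *copy = strdup((const char *)real);
        char *tok;
        for (tok = strtok(copy, " "); tok; tok = strtok(NULL, " ")) {
            int hide = 0;
            size_t i;
            for (i = 0; i < sizeof hidden / sizeof hidden[0]; i++)
                if (strcmp(tok, hidden[i]) == 0) { hide = 1; break; }
            if (!hide) { strcat(filtered, tok); strcat(filtered, " "); }
        }
        free(copy);
    }
    return (const GLubyte *)filtered;
}

/* dyld interposition table: (replacement, original) pairs. */
__attribute__((used, section("__DATA,__interpose")))
static const struct { const void *repl, *orig; } interposers[] = {
    { (const void *)my_glGetString, (const void *)glGetString },
};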
I wants!
Anything that helps. I volunteer to test this on the 1st gen PPC iMac G5.
Funny that element_array, vertex_array_range, and vertex_array_object are supposed to make things faster, not slower. VO doesn't use the GL_APPLE_element_array and GL_APPLE_vertex_array_object extensions, anyways.
VO prefers GL_ARB_vertex_buffer_object over GL_APPLE_vertex_array_range, so if your hardware supports vbo (all recent cards do), then changing those shouldn't matter. the 8500 doesn't support vbo, though.
You should be able to turn those off by editing VO's config.ini file instead of resorting to an app that changes the code.
dovertexbuffers=0
doindexbuffers=0
Also, making sure
bpp=32
in the [Vendetta] section should use a 32bit depth buffer.
It works. Just can't use rglow.
Setting dovertexbuffers and doindexbuffers to 0 doesn't result in the same performance gain. And on older hardware (such as the Rage Pro I tested it on), C32D32 kills framerate badly, and so does C16D16, but not C16D32. I don't know why.
Edit: Yes, they are supposed to make things faster. Sometimes, they do. Apple's implementation of the concept seems fundamentally broken, though.
Edit 2: You do get the same performance boost if you set do_extensions=0, but then you can't have shaders.
-:sigma.SB
dovertexbuffers=0
doindexbuffers=0
whoah. big difference on my 9600...
If you ppl used Wintels, you wouldn't be in this bind!
*runs from the lynching mob*
Sad thing is he's right this time.
-:sigma.SB
So, Ray/Andy/John/Michael and Solra, can we try this? Is it "approved"? I'll do almost ANYTHING to improve my VO framerate, which is around 10 fps in Deneb B-12 with massive spikes of around 3 seconds of NO frames for explosions and whatnot, even with everything turned down to minimum.
Re. the dovertexbuffers/doindexbuffers thing... I was wrong. Setting them to 0 does get exactly the same performance boost. Something must've prevented my changes to config.ini from being accepted last time. (my money's on VO being open at the time. >_<)
There's still the "force 32-bit depth buffer" thing, which seems to improve performance anywhere from slightly to considerably in all cases. (Test cases at this point include a GeForce 2, a GeForce 7800, a Rage 128, a Radeon 7500, a Radeon 9000, a Radeon 9200, a Radeon 9600, a Radeon 9700, and a Radeon 9800... oh, and that includes cases where the color buffer is 32-bit, too.)
-:sigma.SB
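(Again, just an illustrative sketch of how the "force 32-bit depth buffer" half of such a hook might look, using the same dyld-interpose trick as the earlier sketch. The big assumption is that the game's pixel format request ends up going through CGLChoosePixelFormat; if it goes through AGL or NSOpenGLPixelFormat instead, the same attribute-rewriting idea applies there. If any depth buffer at all was requested, the request is bumped to 32 bits before the real call is made.)

/* force_depth32.c - illustrative sketch, not the actual GLStuff code.
 * ASSUMPTION: the game picks its pixel format via CGLChoosePixelFormat;
 * an AGL- or NSOpenGL-based path would need the equivalent hook instead. */
#include <OpenGL/OpenGL.h>

static CGLError my_CGLChoosePixelFormat(const CGLPixelFormatAttribute *attribs,
                                        CGLPixelFormatObj *pix, GLint *npix) {
    /* The attribute list is 0-terminated; boolean attributes stand alone,
     * valued attributes (like kCGLPFADepthSize) are followed by a value. */
    CGLPixelFormatAttribute patched[64];
    int i = 0, n = 0;
    while (attribs[i] != 0 && n < 62) {
        patched[n++] = attribs[i];
        if (attribs[i] == kCGLPFADepthSize) {
            /* Any depth buffer at all? Then ask for 32 bits. */
            GLint requested = (GLint)attribs[i + 1];
            patched[n++] = (CGLPixelFormatAttribute)(requested > 0 ? 32 : requested);
            i += 2;
        } else if (attribs[i] == kCGLPFAColorSize   ||
                   attribs[i] == kCGLPFAAlphaSize   ||
                   attribs[i] == kCGLPFAStencilSize ||
                   attribs[i] == kCGLPFAAccumSize   ||
                   attribs[i] == kCGLPFAAuxBuffers  ||
                   attribs[i] == kCGLPFASampleBuffers ||
                   attribs[i] == kCGLPFASamples     ||
                   attribs[i] == kCGLPFARendererID  ||
                   attribs[i] == kCGLPFADisplayMask) {
            /* Other valued attributes (not an exhaustive list): copy the
             * value through untouched. */
            patched[n++] = attribs[++i];
            i++;
        } else {
            i++;  /* boolean attribute, no value follows */
        }
    }
    patched[n] = 0;
    return CGLChoosePixelFormat(patched, pix, npix);
}

__attribute__((used, section("__DATA,__interpose")))
static const struct { const void *repl, *orig; } interposers[] = {
    { (const void *)my_CGLChoosePixelFormat, (const void *)CGLChoosePixelFormat },
};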
I suppose you can use the program, but I would like to try to incorporate the concepts into VO directly. Having a 16bit color and 32bit depth buffer is faster than the other 2 modes? wacky.
OK I tried it and I can't tell any fps difference. I'll try again tomorrow I guess. Perhaps I'll be more lucid.
Oh yeah, I forgot about this thread.
Here it is:
http://sigma.tejat.net/GLStuff.zip
Oh, and ray, it seems that even in 32-bit color, VO only requests a 24-bit depth buffer. At least, on OSX. (I used OpenGL Profiler. >_>;) And it looks like it's using an 8-bit depth buffer for rglow, but I didn't check that closely.
-:sigma.SB
Edit: Posting as #sigma for the lose.
Edit 2: Oh, and I have more information about the VBO thing. It seems to improve performance considerably on newer (post-GeForce 3) nVidia cards (which is what it's supposed to do), but consistently hurts performance on ATI cards in every test case except one (and that was a single VBO for vertex data only). And it's not only VO's code, the test program I rigged exhibits the same behavior.
Edit 3: So, ahem, it might be helpful if the people who used this, with performance boosts or without, posted here. I know you're out there, you talk to me about it. Help the Devs here!
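(Side note for anyone who wants to cross-check this sort of thing without OpenGL Profiler: once a context is current, legacy GL will tell you directly what it actually allocated. A minimal sketch; the function name is just for illustration.)

/* Query what the current OpenGL context actually got. Works on
 * Tiger-era legacy GL; call it with a context already made current. */
#include <OpenGL/gl.h>
#include <stdio.h>

void report_framebuffer_depths(void) {
    GLint red = 0, green = 0, blue = 0, alpha = 0, depth = 0, stencil = 0;
    glGetIntegerv(GL_RED_BITS,     &red);
    glGetIntegerv(GL_GREEN_BITS,   &green);
    glGetIntegerv(GL_BLUE_BITS,    &blue);
    glGetIntegerv(GL_ALPHA_BITS,   &alpha);
    glGetIntegerv(GL_DEPTH_BITS,   &depth);
    glGetIntegerv(GL_STENCIL_BITS, &stencil);
    /* A "32-bit" depth request commonly comes back as 24 depth + 8 stencil. */
    printf("color R%dG%dB%dA%d, depth %d, stencil %d\n",
           (int)red, (int)green, (int)blue, (int)alpha, (int)depth, (int)stencil);
}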
VO always tries 32bit z-depth first, no matter what color depth is chosen.
If it fails to choose 32bit, it tries 16bit.
There is really no hardware out there anymore that has a full 32-bit depth buffer. They all internally have 24-bit depth with 8-bit stencil (which is unused by VO, and which the hardware may actually use for depth), so I think that's why you're seeing a 24-bit z-buffer in OpenGL Profiler.
I verified that VO is choosing 32bit depth by printing out what the pixelformat values are that VO picks.
OpenGL Profiler keeps crashing when I try using it on VO.
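(For reference, "printing out what the pixelformat values are" looks roughly like this on the CGL side, assuming you have the CGLPixelFormatObj that CGLChoosePixelFormat handed back; AGL has an equivalent in aglDescribePixelFormat. The function name here is just for illustration.)

/* Print what a chosen pixel format claims to provide. `pix` is whatever
 * CGLChoosePixelFormat returned; note that the depth size reported here
 * can differ from what the context actually ends up allocating. */
#include <OpenGL/OpenGL.h>
#include <stdio.h>

void report_pixel_format(CGLPixelFormatObj pix) {
    GLint color = 0, depth = 0, stencil = 0;
    /* Virtual screen 0; a multi-renderer pixel format could be looped over. */
    CGLDescribePixelFormat(pix, 0, kCGLPFAColorSize,   &color);
    CGLDescribePixelFormat(pix, 0, kCGLPFADepthSize,   &depth);
    CGLDescribePixelFormat(pix, 0, kCGLPFAStencilSize, &stencil);
    printf("pixel format: color %d, depth %d, stencil %d\n",
           (int)color, (int)depth, (int)stencil);
}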
Whatever happened to this little wonder of an app (assuming it worked)? I'd be willing to test an Intel build on my MBP if you had the time to compile one, Solra. My shiny new MacBook Pro used to run VO at around 60-100 fps even at maximum settings, but the latest patches have reduced my framerate considerably, to the point where I can actually notice it in some station and fog sectors.
Because of this, an Intel build could be truly useful for us maximizers out there (people who used to run everything on an 837MHz iMac and, now that they have a new Intel Mac, compulsively download every game they can find, OS X or Windows, max out the graphical settings, and fret and mope all day if the game doesn't run perfectly...)
http://sigma.tejat.net/GLStuff86.zip
-:sigma.SB
Solra, I can tell no difference in fps with your app installed. (Yes, I installed it in vendettaclient.app, not vendetta.app - both, actually, heh.)
Looking directly at the station in Deneb O-3, I get around 40 fps, both before and after installing the thing.
iMac G5 1.8 GHz, onboard nVidia GeForce FX 5200 64MB card.
Tell me how I can be more precise in my measurements and I shall do so.