So I unpacked my new iMac Intel with its shiny new 128MB Radeon X1600, set it up, and went to check out VO. I got to see the world in all its shiny glory, up until I entered the game and saw this:
http://homepage.mac.com/ctishman/dump0000.png
The glow effect seems to be stretching from the origin point to wherever it ought to be, and leaving a visible trail along the way.
Yep, that's a known issue on the iMac Intel machines.
I'm still trying to figure that one out.
If it helps, it seems only to happen when the glowing area is touching an edge of the screen.
Also, would it be possible to get 1280x960 and 1680x1050 screen rezzes in the next patch? Those are native for the iMac's display.
I already get 1440 x 960 on the G5 iMac; it's always been available.
VO queries for the available resolutions, so if your hardware supports the res, it will be shown.
If it isn't, compare VO's list of available resolutions (ignoring bpp) with Mac OS X's list in the system preferences.
ahh, okay.
To clarify one of ctishman's comments, even though the bug is only visible when the source object is on the edge of the screen, there is no glow AT ALL when the source object is away from the edge. For example, when the blue docking arrows are on the edge you get a blue origin-to-edge smear, but when they are in the center of the screen you get nothing at all. Looking forward to seeing this working!
Yes, I know about the issue. Unfortunately, I am unable to figure out why it's doing that.
Using OpenGL Profiler I noticed that the only time glGetFloatv and glLoadMatrixf are used is when scene glow is turned on. I'm guessing glGetFloatv is used to pull the matrix out of OpenGL to be changed manually, and then it's stuck back in using glLoadMatrixf. Obviously I haven't seen the source code, but might I suggest checking out the code that changes that matrix while it's outside of OpenGL?
I've played with projection matrices quite a bit in the past, so is there any chance for the portion of the code where that matrix is manipulated being posted?
You know, when I first saw the problem I thought it was a texture UV problem, but I was unable to duplicate the problem. My second thought was the projection matrix was screwed up. The code that generates the backgrounds is also used when doing glow. It's the code in the render-to-texture routine that stores the matrices and flips the rendered scene up-side down and then restores the matrices.
I'll look at that code to see if there's any platform/endian issue, but I can't see why there would be. I basically do a glGetFloatv(GL_PROJECTION_MATRIX, &projection[0]);
and then change render context and then
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(&projection[0]);
where
GLfloat projection[16];
Thanks for the help.
My first thought is about whether you are copying across the modelview matrix along with the projection matrix. If not, you might want to try that. Also, I've never really seen an array passed into glGetFloatv the way you are doing it. Here is what I would have done:
GLfloat projection[16];
GLfloat modelview[16];
glGetFloatv(GL_PROJECTION_MATRIX, projection);
glGetFloatv(GL_MODELVIEW_MATRIX, modelview);
and then change render context and then
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(projection);
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelview);
It looks like you are passing a pointer to the first element of the array rather than passing the array pointer itself. I have no idea if this would affect anything, though.
I also copy the modelview matrix, and &projection[0] gives the same result as your way, unless the compiler is broken. I wouldn't put that past it, though.
That code is from way back when some compiler version on some platform complained that it couldn't go from GLfloat[16] to GLfloat*.
Well, I think I actually found something useful!
After switching the Vendetta client over to Rosetta (which was way harder than expected), I ran an OpenGL trace to see which calls were being made. On my Intel iMac I found the following in one of the traces:
1998: glPushMatrix();
1999: glLoadIdentity();
2000: glOrtho(0, 3.03865e-319, 3.03865e-319, 0, 3.04498e-319, 3.03865e-319);
2001: glMatrixMode(GL_MODELVIEW);
2002: glPushMatrix();
2003: glLoadIdentity();
....
....
2019: glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 256, 256);
2020: glFlush();
2021: glGetFloatv(GL_MODELVIEW_MATRIX, 0xbfffc530);
2022: glGetFloatv(GL_PROJECTION_MATRIX, 0xbfffc570);
2023: glFlush();
2024: glMatrixMode(GL_PROJECTION);
2025: glLoadMatrixf({0.977419, 0, 0, 0, 0, 1.30323, 0, 0, 0, 0, -1.00003, -1, -0, -0, -2.00003, -0});
2026: glMatrixMode(GL_MODELVIEW);
2027: glLoadMatrixf({-0.599983, -0.0242058, 0.799646, 0, 0.529706, -0.761073, 0.374406, 0, -0.599526, -0.648214, -0.469453, 0, 129.48, 288.316, -35.1125, 1});
2028: glViewport(0, 0, 256, 256);
2029: glBindTexture(GL_TEXTURE_2D, 2);
There is very obviously a problem at line 2000. After running the same trace on my old Powerbook (which has glow functioning properly) line 2000 would read glOrtho(0,1,1,0,-1,1);
Comparing the projection and modelview matrices at lines 2025 and 2027 with those from the Powerbook trace, I found that they are reasonably similar, so the matrices aren't the issue. I think the problem is related to the bad values in glOrtho. It looks like they originate from non-double precision being passed as double precision numbers. Maybe some typecasting would help?
Now it's time for sleep. :)
PS. Apologies for spamming this post with profiler code.
Cool, thanks for the info.
Yeah, it looks like the compiler is not doing something right.
I'll take a look at the disassembly and see if the intel version of the code is not right. ([edit] grr. gdb doesn't let me see intel disassembly. I'll have to use that other thing which i don't remember right now.)
my line of code is this:
glOrtho(0, 1, 1, 0, -1.0, 1.0)
I call glOrtho in other places like this:
glOrtho(0.0, 1.0, 1.0, 0.0, -1.0, 1.0)
and that works fine.
Compiler bug I would assume.
I'll change the 1,1 to 1.0, 1.0 but I don't know if that will solve the problem because the last two numbers are already that way.
Oh yeah, I can use otool.
Anyway, it doesn't make much sense. It's pulling 1.0 out of some seemingly random place in memory when it calls the broken glOrtho, but it pulls it out of some other place in the correct glOrtho.
movsd 0x0001ef15(%ebx),%xmm0 ; the good one
movsd 0x00015e2b(%ebx),%xmm0 ; the broken one
A third way is this:
movl $__ZN9OpenGLRef19GetProjectionMatrixERA4_A4_f.eh,0x28(%esp,1)
movl $0x3ff00000,0x2c(%esp,1)
where I'm assuming the first part is the 0 and the second part is what makes it 1.
I don't know what's in ebx because it is never set in the function. I don't think it's the this pointer.
I thought about it some more and it doesn't seem like a random place in memory. 3.03865e-319 is a double-precision 1 with the endianness reversed, and 3.04498e-319 is -1 with the endianness reversed. Calling glOrtho(0, 1, 1, 0, -1.0, 1.0) might confuse the compiler because it has to convert the arguments, and you passed it half integers, half doubles. Decimal points on all six values might clear it up, or typecasting them all as (GLdouble) might as well. Hope this helps!
By the way, the reason I'm so invested in this issue is that I really want to see the glow work on my fancy new iMac. :P
I meant the random place in memory is where it's finding the number, not the number itself. I understand that the number is byte-swapped, but I have no idea why.
I added the .0 and even tried some other things, like adding .0f and putting the 1.0 into a variable and passing the variable to the function (which the optimizing compiler 'fixed'), and all the disassemblies are identical. No idea what's going on.
Thanks a lot for helping.
[edit]
I was reading the disassembly wrong.
the __ZN9OpenGLRef19GetProjectionMatrixERA4_A4_f.eh stuff is otool trying to convert the hex value of 0x00000000 into an address.
[edit]
Looking at the disassembly, the code is definitely putting -1 in with the proper endianness, assuming 0xbff0000000000000 is -1.0 as a double.
The code is doing the same thing for all of the -1.0 in glOrtho calls.
Now, if only i had an intel mac I could gdb it to see what is going on exactly.
It seems fairly certain that the problem is machine dependent. I mentioned before that I ran VO on my Powerbook with scene glow working fine, and the glOrtho calls looked great. The only difference I could see from the results on my Intel iMac was that glOrtho had an endian problem. Just to clarify, the program I was using to trace the GL calls is OpenGL Profiler, provided by Apple; I used it on both machines. Is there a possibility that adding the decimal point actually fixed the problem, but you can't tell until you try it on an Intel machine? The reason I consider this a possibility: if the machine you're compiling on is non-Intel, then perhaps what you're seeing in the disassembly is what you're supposed to see, and the only way to catch it doing something wrong is on an Intel machine.
If you could explain to me what gdb is I would be happy to try out a new build for you. I'll probably edit out the address once you respond.
[edit]
Thanks mr_spuck. By the way raybondo, 0xbff0000000000000 is only a double-precision -1 on my iMac if you flip the endianness. Are you compiling on a PC or an older Mac?
I'm not running the program to test it. I am looking at the actual machine language. Putting in the decimals didn't change the compiler output at all so I don't see how it would fix the problem.
Maybe I'll just forget using glOrtho in that one particular place and calculate my own projection matrix.
Thanks, mr_spuck.
gdb comes with the Xcode toolset from Apple.