GL_TEXTURE_CUBE_MAP_SEAMLESS on OS X
One of the nice features included in the OpenGL 3.2 Core Profile is seamless cube map filtering, which (as the name suggests) helps reduce seams that can appear from sampling near the edges of a cube map face. It’s easy to enable:
glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);
So I enabled it on a Mac application I’m developing, to great effect on my development platform (ATI 4870). However, I soon discovered that when running the same application on an older Nvidia 9600, the results were quite different. In fact, enabling seamless sampling made the app unusable: only a black screen would be rendered, even when using a shader program that had no cube map samplers. While trying to reduce the problem to a simpler test case, I stumbled upon a very useful snippet of code:
GLint gpuVertex, gpuFragment;
CGLGetParameter(CGLGetCurrentContext(), kCGLCPGPUVertexProcessing, &gpuVertex);
CGLGetParameter(CGLGetCurrentContext(), kCGLCPGPUFragmentProcessing, &gpuFragment);
What this does, in case it isn’t already obvious, is check whether GPU (i.e. hardware) processing is enabled for the vertex and fragment stages. I’ve sometimes wondered how to query this (OpenGL Shader Builder displays this information). So now, before enabling GL_TEXTURE_CUBE_MAP_SEAMLESS, I use this check to detect a software fallback and skip enabling it when one is in effect. Strangely, it’s the vertex processing that returns 0 in this case.
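Putting the pieces together, here’s a minimal sketch of the guard I’m describing. The function names are my own, purely for illustration; the CGL parameters are the real ones from the snippet above.

#include <stdbool.h>
#include <OpenGL/OpenGL.h>   // CGLGetCurrentContext, CGLGetParameter
#include <OpenGL/gl3.h>      // GL_TEXTURE_CUBE_MAP_SEAMLESS (3.2 Core Profile)

/* Hypothetical helper: returns true only if both vertex and fragment
 * processing are running on the GPU, i.e. no software fallback. */
static bool isHardwareAccelerated(void)
{
    GLint gpuVertex = 0, gpuFragment = 0;
    CGLContextObj ctx = CGLGetCurrentContext();
    CGLGetParameter(ctx, kCGLCPGPUVertexProcessing, &gpuVertex);
    CGLGetParameter(ctx, kCGLCPGPUFragmentProcessing, &gpuFragment);
    return gpuVertex != 0 && gpuFragment != 0;
}

/* Hypothetical setup call: only enable seamless cube map filtering
 * when both stages are actually on the GPU. */
static void enableSeamlessCubeMapsIfSafe(void)
{
    if (isHardwareAccelerated())
        glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);
}

Calling something like enableSeamlessCubeMapsIfSafe() once after the context is current sidesteps the black-screen behavior on hardware that falls back to software processing, while keeping the seamless filtering everywhere else.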