As part of an attempt to generate a very simple-looking sky, I've created a skybox (basically a cube going from (-1, -1, -1) to (1, 1, 1)), which is drawn after all of my geometry and forced to the back via the following simple vertex shader:
#version 330

layout(location = 0) in vec4 position;
layout(location = 1) in vec4 normal;

out Data
{
    vec4 eyespace_position;
    vec4 eyespace_normal;
    vec4 worldspace_position;
    vec4 raw_position;
} vtx_data;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    // Strip the translation column so the box stays centred on the camera.
    mat4 view_without_translation = view;
    view_without_translation[3][0] = 0.0f;
    view_without_translation[3][1] = 0.0f;
    view_without_translation[3][2] = 0.0f;

    vtx_data.raw_position = position;
    vtx_data.worldspace_position = model * position;
    vtx_data.eyespace_position = view_without_translation * vtx_data.worldspace_position;

    // Swizzle z to w so the depth lands on the far plane after the perspective divide.
    gl_Position = (projection * vtx_data.eyespace_position).xyww;
}
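For context, the .xyww swizzle on the last line is what pins the box to the back: after the perspective divide, the depth becomes w / w = 1.0, i.e. exactly the far plane. Spelled out, using the same names as the shader above:

vec4 clip_pos = projection * vtx_data.eyespace_position;
gl_Position = clip_pos.xyww;   // z' = w, so depth = z' / w = 1.0 after the divide
// Note: this assumes a depth test of GL_LEQUAL; with plain GL_LESS,
// fragments at exactly 1.0 fail against a depth buffer cleared to 1.0.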
From this, I'm trying to have my sky display as a very simple gradient from a deep blue at the top to a lighter blue at the horizon.
Obviously, simply mixing my two colors based on the Y coordinate of each fragment is going to look very bad: the fact that you're looking at a box and not a dome is immediately clear, as seen here:

[screenshot: the gradient follows the box; the corners stand out clearly]
Note the fairly visible "corners" at the top left and top right of the box.
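To spell out why (illustrative numbers): with raw Y, the mix factor is constant across the whole top face and varies linearly down the sides, so the creases where faces meet stay visible:

// Top centre of the box:  (0, 1, 0) -> y = 1.0
// Top corner of the box:  (1, 1, 1) -> y = 1.0  (same colour as the centre)
// Middle of a side face:  (1, 0, 0) -> y = 0.0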
Instinctively, I was thinking that the obvious fix would be to normalize the position of each fragment, to get a position on a unit sphere, then take the Y coordinate of that. I thought that would result in a value that would be constant for a given "altitude", if that makes sense. Like this:
#version 330

in Data
{
    vec4 eyespace_position;
    vec4 eyespace_normal;
    vec4 worldspace_position;
    vec4 raw_position;
} vtx_data;

out vec4 outputColor;

const vec4 skytop = vec4(0.0f, 0.0f, 1.0f, 1.0f);
const vec4 skyhorizon = vec4(0.3294f, 0.92157f, 1.0f, 1.0f);

void main()
{
    vec4 pointOnSphere = normalize(vtx_data.worldspace_position);
    float a = pointOnSphere.y;
    outputColor = mix(skyhorizon, skytop, a);
}
The result, however, is much the same as the first screenshot (I can post it if necessary, but since it's visually similar to the first, I'm skipping it to keep this question short).
After some random fiddling (cargo cult programming, I know :/), I realized that this works:
void main()
{
    vec3 pointOnSphere = normalize(vtx_data.worldspace_position.xyz);
    float a = pointOnSphere.y;
    outputColor = mix(skyhorizon, skytop, a);
}
The only difference is that I normalize the position without its W component.
And here's the working result (the difference is subtle in screenshots but quite noticeable in motion):

[screenshot: the gradient now curves smoothly, like a dome]
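One caveat, though it's beside the main question: below the horizon, pointOnSphere.y goes negative and mix() extrapolates past skyhorizon. Clamping the factor would keep the gradient bounded (untested sketch):

float a = clamp(pointOnSphere.y, 0.0, 1.0);
outputColor = mix(skyhorizon, skytop, a);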
So, finally, my question: why does this work when the previous version fails? I must be misunderstanding something extremely basic about homogeneous coordinates, but my brain just isn't clicking right now!
GLSL's normalize does not itself handle homogeneous coordinates: it interprets its argument as a plain vector in R^4. That is usually not what you want. However, if vtx_data.worldspace_position.w == 0, the two versions should produce the same result.

I don't know what vec3 pointOnSphere = normalize(vtx_data.worldspace_position); is supposed to mean, since the left-hand side should have type vec4 as well.
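To make this concrete, a small sketch (illustrative values):

vec4 p = vec4(0.0, 1.0, 0.0, 1.0);   // a point at the zenith, with w = 1
vec4 n4 = normalize(p);              // length(p) = sqrt(0 + 1 + 0 + 1) = sqrt(2)
// n4.y = 1.0 / sqrt(2) ≈ 0.707 -- the w component leaks into the length

vec3 n3 = normalize(p.xyz);          // length(p.xyz) = 1.0
// n3.y = 1.0 -- the intended point on the unit sphere

vec4 d = vec4(0.0, 1.0, 0.0, 0.0);   // a direction, with w = 0
// normalize(d).y == normalize(d.xyz).y == 1.0 -- only then do the two agree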