Light Pre Pass in XNA: Basic Implementation

Reposted from: http://mquandt.com/blog/2009/12/light-pre-pass-in-xna-basic-implementation/

NOTE: This article is now obsolete. An up-to-date sample and article can be found at http://mquandt.com/blog/2010/03/light-pre-pass-round-2/

In this part I will cover how to implement the basic form of the Light Pre Pass renderer, with support for point lights, and the basic Blinn-Phong shader, including Albedo texture support.

As this article is fairly advanced in nature, I have to make certain assumptions about my audience, so that I do not spend half my time explaining basics. Firstly, you should have an understanding of basic concepts such as Cameras, Fullscreen Quads (including how to render one) and rendering a mesh with custom effects.

This pretty much means that as long as you have done some 3D work before, you should be fine. It would be best if you also knew XNA, as I will be using that to write this implementation; however, as long as you can translate from C# and get the basic idea, that should be enough.

As you can see from these requirements, this article is not aimed at beginners. If you are looking for tutorials on how to get started with XNA for 3D development, I would recommend visiting some of the great introductory sites available online.

Those sites will help you get started with XNA, and once you are familiar and comfortable with the concepts behind 3D graphics, you can return here to learn an advanced renderer implementation.

My focus in this article is the implementation of the renderer; as a result, I will not be covering the implementation of cameras or scene graphs.

Now that the housekeeping is out of the way, we can begin.

The Renderer in C#

Light Pre Pass (LPP), or Deferred Lighting, operates in 3 stages (sketched in code after the list below):

  1. Depth + Normals Rendering
  2. Light Rendering
  3. Materials Rendering
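
A frame's Draw call simply runs these stages in order. A minimal sketch of the flow, with hypothetical method names:

RenderDepthNormals();   // Stage 1: fill the Depth and Normals render targets
RenderLights();         // Stage 2: accumulate every light into the Light target
RenderMaterials();      // Stage 3: shade each object using the Light buffer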

These 3 stages accumulate information into render targets, which are used by the next stage, until the Materials stage produces the final image. So the first thing we must do is set up at least the following Render Targets:

  • Depth (SurfaceFormat.Single)
  • Normals (SurfaceFormat.Bgra1010102)
  • Lights (SurfaceFormat.Color)
depth = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Single, RenderTargetUsage.DiscardContents);
normals = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Bgra1010102, RenderTargetUsage.DiscardContents);
light = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Color, RenderTargetUsage.DiscardContents);
final = new RenderTarget2D(gfx, width, height, 1, SurfaceFormat.Color, RenderTargetUsage.DiscardContents);

We use Bgra1010102 to store the normals because we want maximum precision for the 3 channels we are using. The closest 32-bit format with 3 high-precision channels is 1010102, which gives 10 bits to each of the 3 channels we care about, greater precision than the 8 bits per channel of a standard A8R8G8B8 (or Color) surface format.

The Materials (final) pass can be rendered directly to the backbuffer or to a render target; which you choose depends on your needs and is completely up to you. I have suggested SurfaceFormats above, but feel free to use your own; just note that the shaders I provide may not work correctly with your chosen format.

Depth + Normals

The first stage of the renderer requires you to render the Depth and Normal values for each pixel to the screen. You could optionally render the position directly, but many post-processing techniques use depth information, so it is worth rendering depth now and re-using it later.

First we must setup the render targets on our device, easily done with two lines of code:

gfx.SetRenderTarget(0, depth);
gfx.SetRenderTarget(1, normals);

For those who have not worked with multiple render targets before, the number in the above code indicates the render target index, and allows you to un-set and resolve the render target later.

Next, clear the render targets. As we are using multiple render targets, a simple call to GraphicsDevice.Clear will not suffice; instead we render a fullscreen quad with a cheap shader that writes the clear colours out to both render targets.

struct VS_OUT
{
    float4 Position        : POSITION;
};
  
VS_OUT vs_main(float3 position : POSITION)
{
    VS_OUT output = (VS_OUT)0;
    // The quad's vertices are already in clip space; pass them straight through.
    output.Position = float4(position, 1);
  
    return output;
}
  
struct PS_OUT
{
    float4 Depth : COLOR0;
    float4 Normals : COLOR1;
};
  
PS_OUT ps_main()
{
    PS_OUT output = (PS_OUT)0;
  
    // Clear depth to the far plane.
    output.Depth = 1.0f;
  
    // Clear normals to zero, with alpha at 1 to match the depth/normals pass.
    output.Normals = float4(0, 0, 0, 1);
  
    return output;
}
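
On the C# side, invoking this clear is just a matter of drawing a fullscreen quad with the effect applied. A minimal sketch, assuming clearEffect is the shader above and fullscreenQuad is a helper that draws the quad (both hypothetical names):

clearEffect.Begin();
clearEffect.CurrentTechnique.Passes[0].Begin();
fullscreenQuad.Draw(gfx);   // issues the quad's vertices and draw call
clearEffect.CurrentTechnique.Passes[0].End();
clearEffect.End();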

Next you render the objects, using a special shader that writes the Depth and Normals to the two render targets. If you intend to implement Normal Mapping, or a similar technique, this is where you would calculate and combine the Normals. For the purposes of this article, only the basic per-vertex normals will be stored here.

One thing I had to do was ensure a couple of render states were set correctly, specifically DepthBufferEnable and DepthBufferWriteEnable. Ensure both of these are set to true before continuing.
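
In XNA 3.x these are set on the device's RenderState:

gfx.RenderState.DepthBufferEnable = true;
gfx.RenderState.DepthBufferWriteEnable = true;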

The Depth and Normals shader is quite simple. First the object is transformed as it would normally be when rendering, and then the Z and W values from the transformed position are passed to the pixel shader, alongside the Normal.

struct VS_IN
{
    float4 Position   : POSITION;
    float4 Normal     : NORMAL0;
};
  
struct VS_OUT
{
    float4 Position   : POSITION;
    float4 Depth      : TEXCOORD0;
    float4 Normal     : TEXCOORD1;
};
  
VS_OUT depthNorm_VS(VS_IN input)
{
    VS_OUT output = (VS_OUT)0;
  
    float4x4 wvp = mul(World, ViewProjection);
  
    output.Position = mul(input.Position, wvp);
  
    // Pass z and w so the pixel shader can compute z/w depth.
    output.Depth.xy = output.Position.zw;
  
    // Rotate the normal into world space. The mul order must match the
    // row-vector convention used for the position above (the original listing
    // had the arguments swapped). This assumes a uniformly scaled World
    // matrix; otherwise use the inverse-transpose.
    output.Normal.xyz = mul(input.Normal.xyz, (float3x3)World);
  
    return output;
}

If you look at your render targets, you may see an almost entirely white image for the depth buffer; this is normal, as the differences in depth between most points on an object are minuscule, and the values are close to 1. Your normals buffer, however, should look something like this:

[Image: the normals buffer]

Inside the pixel shader, the Z value is divided by the W value to get the depth, and that is written to the first render target. Then the Normal is normalised and shifted from a range of [-1, 1] to [0, 1].

struct PS_OUT
{
    float4 Depth : COLOR0;
    float4 Normals : COLOR1;
};
  
PS_OUT depthNorm_PS(float4 depth : TEXCOORD0, float4 normal : TEXCOORD1)
{
    PS_OUT output = (PS_OUT)0;
  
    // Perspective depth: z divided by w.
    output.Depth = depth.x / depth.y;
  
    // Shift the normal from [-1, 1] into [0, 1] for storage.
    output.Normals.rgb = 0.5f * (normalize(normal.xyz) + 1.0f);
  
    // Set alpha for both Depth and Normals to 1 (for some reason it's required)
    output.Depth.a = 1.0f;
    output.Normals.a = 1.0f;
  
    return output;
}

Now that we have our Depth and Normal values stored in the render targets, we can resolve them and get their respective textures so that the lights can be rendered using this data. This is quite simple in XNA: just point the render target slots on the graphics device at either another render target or null. In this case, we set RT0 to the light buffer and RT1 to null.

gfx.SetRenderTarget(0, light);
gfx.SetRenderTarget(1, null);
depthImage = depth.GetTexture();
normImage = normals.GetTexture();

Be sure to clear the light buffer to TransparentBlack, and then we can move on to rendering the lights.
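
Clearing a single bound target is just a normal device clear. With the light buffer set as RT0:

gfx.Clear(Color.TransparentBlack);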

In this first tutorial, I will implement point lights only. Check back for future tutorials about implementing other types of lights, like Directional Lights, etc.

Rendering the light stage is a little bit more complicated than the Depth + Normals stage. This time around, a number of Render States must be set in the beginning, and even more for each light based on the position of the camera.

Render States

The following render states must be set when drawing the lights, to take advantage of alpha blending for blending multiple overlapping lights.

gfx.RenderState.AlphaBlendEnable = true;
gfx.RenderState.SeparateAlphaBlendEnabled = false;
gfx.RenderState.AlphaBlendOperation = BlendFunction.Add;
gfx.RenderState.SourceBlend = Blend.One;
gfx.RenderState.DestinationBlend = Blend.One;
gfx.RenderState.DepthBufferEnable = false;
gfx.RenderState.DepthBufferWriteEnable = false;

Here we are disabling the Z-culling feature of the graphics card so that overlapping lights can be drawn, as well as enabling Alpha Blending over all channels of the render target so that the process of combining overlapping lights will be handled by hardware automatically. We also ensure that no modifications to the destination or source values are made during the blending stage, and that Additive blending is used. (Remember that lighting equations add multiple lights together)

Now you run through each light and set the CullMode render state based on where the camera frustum is located. If the frustum is inside or overlaps the light bounding volume (in this case a sphere), the CullMode needs to be set to CullClockwiseFace. CullCounterClockwiseFace should be set if the frustum is completely outside the light bounding volume. Remember to also ensure that the CullMode is set to CullCounterClockwiseFace after all of the lights have been rendered.
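
A sketch of that per-light test, using XNA's BoundingFrustum/BoundingSphere intersection as the overlap check; frustum and lightSphere are assumed to be maintained by the camera and the light elsewhere:

// Camera frustum touches the light volume: render the volume's back faces.
// Frustum completely outside the volume: render the front faces as usual.
if (frustum.Intersects(lightSphere))
    gfx.RenderState.CullMode = CullMode.CullClockwiseFace;
else
    gfx.RenderState.CullMode = CullMode.CullCounterClockwiseFace;

// After all lights have been drawn, restore the default:
gfx.RenderState.CullMode = CullMode.CullCounterClockwiseFace;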

In the sample code, I use a Mesh to easily load and store the light volume, which for a Point Light is a sphere. A scaling matrix allows the attenuation (radius) to be changed, so be sure to update any matrices as needed.
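
For example, the light's world matrix might be rebuilt whenever the position or radius changes; a sketch using the _pos and _attenuation fields that appear in the code below:

world = Matrix.CreateScale(_attenuation) * Matrix.CreateTranslation(_pos);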

Some notes about the next code sample:

  • cmanager is my CameraManager, it is used here to set the ViewProjection and InverseViewProjection matrices, which are required to transform the Depth back into a position for lighting.
  • caller is the Renderer class, which coordinates rendering each stage, as well as setting up and resolving the appropriate buffers.
public void DrawLightDeferred(GraphicsDevice gfx, CameraManager cmanager, Renderer caller)
{
    shader.Begin();
  
    // Set Matrix params
    cmanager.ApplyCameraParameters(ref shader);
    shader.Parameters.GetParameterBySemantic("WORLD").SetValue(world);
  
    // Set Depth and Normals buffers
    shader.Parameters["Depth_Tex"].SetValue(caller.GetDepthImage());
    shader.Parameters["Normals_Tex"].SetValue(caller.GetNormalsImage());
  
    // Set lighting params
    shader.Parameters["LightPos"].SetValue(_pos);
    shader.Parameters["Attenuation"].SetValue(_attenuation);
    shader.Parameters["SpecPower"].SetValue(SpecularPower);
    shader.Parameters["LightColor"].SetValue(LightColor.ToVector4());
  
    for (int j = 0; j < lightMesh.Meshes.Count; j++)
    {
        gfx.Indices = lightMesh.Meshes[j].IndexBuffer;
  
        for (int k = 0; k < lightMesh.Meshes[j].MeshParts.Count; k++)
        {
            for (int i = 0; i < shader.CurrentTechnique.Passes.Count; i++)
            {
                EffectPass pass = shader.CurrentTechnique.Passes[i];
                pass.Begin();
  
                gfx.VertexDeclaration = lightMesh.Meshes[j].MeshParts[k].VertexDeclaration;
  
                gfx.Vertices[0].SetSource(lightMesh.Meshes[j].VertexBuffer,
                    lightMesh.Meshes[j].MeshParts[k].StreamOffset,
                    lightMesh.Meshes[j].MeshParts[k].VertexStride);
  
                gfx.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                    lightMesh.Meshes[j].MeshParts[k].BaseVertex,
                    0, // minVertexIndex; the original passed StartIndex here, which is an index-buffer offset, not a vertex index
                    lightMesh.Meshes[j].MeshParts[k].NumVertices,
                    lightMesh.Meshes[j].MeshParts[k].StartIndex,
                    lightMesh.Meshes[j].MeshParts[k].PrimitiveCount);
  
                pass.End();
            }
        }
    }
    shader.End();
}

Now I need to run through some helper methods I use in the upcoming point light shader. These methods handle transforming a position from Post Projection space, to Screen space, as well as calculating the half pixel offset required by DX9.

float2 postProjToScreen(float4 position)
{
    // Perspective divide, then map [-1, 1] clip space into [0, 1] texture
    // space, flipping y since texture coordinates grow downwards.
    float2 screenPos = position.xy / position.w;
    return (0.5f * (float2(screenPos.x, -screenPos.y) + 1));
}
  
float2 halfPixel()
{
    // DX9 offsets pixel centres by half a texel relative to texture coordinates.
    return -(0.5f / float2(fViewportWidth, fViewportHeight));
}

These are simple enough, and more importantly, *just work*.

Now for the point light shader. Here the light volume is transformed as needed in a really simple vertex shader:

struct VS_OUT
{
    float4 Position        : POSITION;
    float4 LightPosition   : TEXCOORD0;
};
  
VS_OUT vs_main(float4 inPos : POSITION)
{
    VS_OUT output = (VS_OUT)0;
  
    float4x4 wvp = mul(World, ViewProjection);
  
    output.Position = mul(inPos, wvp);
    output.LightPosition = output.Position;
  
    return output;
}

The following variables are also passed to the shader for lighting calculations:

float3 LightPos;
float Attenuation;
float SpecPower;
float4 LightColor;
float3 CamPos : VIEWPOSITION;
float3 EyeDepthRay;

The key code comes in the pixel shader. The first thing needed is to transform the position of the pixel from post-projection space to screen space, handled by the helper method I mentioned earlier. Then the half-pixel offset is subtracted from the screen-space position, so that the values read from the Depth and Normal buffers are correct.

// Transform from post-projection to texcoords
float2 screenPos = postProjToScreen(projPos);
// DX9 half pixel offset
float2 texCoord = screenPos - halfPixel();
  
float depth = tex2D(depthSampler, texCoord).r;

Next, read the depth from the Depth buffer. If the value is not less than 1, we simply write 0 for this pixel, as there is no depth information at that point and nothing to light. If there is, the lighting can be calculated for that point, starting by reconstructing the world-space position.
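
The early-out itself is not shown in the listings below; a minimal sketch of it, placed immediately after the depth read:

// Nothing was rendered here (depth still at the far-plane clear value),
// so output no light for this pixel.
if (depth >= 1.0f)
    return float4(0, 0, 0, 0);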

// Reconstruct position from screen space + depth
float4 position;
position.x = texCoord.x * 2 - 1;
position.y = (1 - texCoord.y) * 2 - 1;
position.z = depth;
position.w = 1.0f;
position = mul(position, InvViewProjection);
position.xyz /= position.w;

For more information on how to reconstruct a position based on a depth value, read this. There are also alternative and improved methods listed there, which can be used depending on your needs.

Next the normal is acquired from the normal buffer, and restored to the [-1, 1] range so that it can be correctly used in the lighting calculations.

// Restore Normal to the [-1, 1] range
float3 normal = tex2D(normSampler, texCoord).xyz;
normal = normalize(2.0f * normal - 1.0f);

Now the lighting can begin. There are two key elements that need to be calculated for our light buffer: N.L and Attenuation. N.L is the basic element in every lighting equation, and simply consists of the dot product between the Normal and the Light Direction.

Attenuation is calculated by determining the ratio of the distance to the light over the maximum attenuation distance (squared, as implemented below), then flipping it so that 0 is the furthest point from the light. Here I also pre-combine the attenuation and the N.L value. You can of course combine these later when writing out the buffer; it ultimately gives the same result.

// Attenuation Calcs
float3 lDir = LightPos - position.xyz;
// 1 - (d / Attenuation)^2, saturated: falls to 0 at the edge of the light volume
float atten = saturate(1 - dot(lDir/Attenuation, lDir/Attenuation));
lDir = normalize(lDir);
  
// N.L, pre-multiplied by the attenuation
float nl = dot(normal, lDir) * atten;

Next we calculate the specular value. As we are using the Blinn-Phong lighting equation later on, the Half Vector is used instead of the Reflection Vector, which ends up being a cheaper calculation for us. (The saving is negligible on most modern systems, and the visual difference is imperceptible.)

For the purposes of this article, I will only include the code from the Blinn-Phong variant, however in the downloadable sample, I provide both methods that can be toggled with a boolean. (Change the technique to change the method)

Remember that this only affects the specular value, so do not worry that this will restrict you to the Blinn-Phong (or just Phong) lighting model.

// camDir is the normalized direction from the surface point to the camera,
// i.e. normalize(CamPos - position.xyz)
float3 halfDir = normalize(lDir + camDir);
float spec = pow(saturate(dot(normal, halfDir)), SpecPower);

Finally we generate the buffer; this is where we combine the light colour with the calculated N.L and Attenuation values, packing the specular term into the alpha channel.

// RGB = light colour * N.L * atten; A = specular * N.L * atten
return float4(LightColor.r, LightColor.g, LightColor.b, spec) * nl;

You should get something that looks like this (note that due to transparency this may look weird; the essential part is the lights making up the shape of the model):

[Image: the light buffer]

Now we are entering the home stretch. All that is left to render now is the materials for each object. This is simply a matter of rendering each object again, and using the Light buffer to shade the object. Here is also where the material-flexibility of LPP comes into play, as each object uses its own shader.

To prepare for this stage, simply resolve the light buffer by setting either the backbuffer (null) or a “Final Image” render target as RT0. Then you can get the light texture and provide it to the objects for use when rendering.
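
In code this mirrors the earlier resolve step, with final being the optional render target created at the start (lightImage follows the depthImage/normImage naming used above):

gfx.SetRenderTarget(0, final);   // or null to render straight to the backbuffer
lightImage = light.GetTexture();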

This is the pixel shader:

float2 scrCoord = postProjToScreen(input.ScrCoord) - halfPixel();
  
// Light buffer: RGB holds the diffuse light, A holds the specular term
float4 light = tex2D(lightSampler, scrCoord);
  
float3 texCol = tex2D(texSampler, input.TexCoord).rgb;
  
float3 lighting = saturate(AmbientLight + (light.rgb * texCol) + light.aaa);
  
return float4(lighting, 1);

Here I adjust by the half-pixel offset and transform from post-projection to screen space inside the vertex shader, so those calculations are as before; I then pass the corrected texture coordinate to the pixel shader.
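
For completeness, here is a minimal vertex shader consistent with the pixel shader listed above, which does the screen-space math itself from input.ScrCoord; the struct and semantic names are assumptions:

struct VS_IN
{
    float4 Position : POSITION;
    float2 TexCoord : TEXCOORD0;
};
  
struct VS_OUT
{
    float4 Position : POSITION;
    float2 TexCoord : TEXCOORD0;
    float4 ScrCoord : TEXCOORD1;
};
  
VS_OUT material_VS(VS_IN input)
{
    VS_OUT output = (VS_OUT)0;
  
    float4x4 wvp = mul(World, ViewProjection);
    output.Position = mul(input.Position, wvp);
  
    // Pass the post-projection position so the pixel shader can sample the
    // light buffer at this pixel's screen location.
    output.ScrCoord = output.Position;
    output.TexCoord = input.TexCoord;
  
    return output;
}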

As this material is a Blinn-Phong material, the equation is rather simple. The sum of “light colour multiplied by N.L and attenuation” over all lights is handled by the alpha blending and the light shaders, so the light buffer's RGB simply needs to be multiplied by the texture (Albedo) colour, and then added to the ambient light term and the specular term to complete the lighting equation.

With that done, you now have either a backbuffer or a render target filled with a lit scene.

[Image: the final lit scene]

There are many other materials which can be adapted to use the light buffer, and there is also a modification that can be made to the light buffer and final material shaders to allow for a material specular value; however, I will leave those to future articles.

I hope this has been informative, and if you have any questions, please post them in the comments. Also be sure to check back for new tutorials covering different light types, materials, and other additions. I hope to get shadows implemented into the system, and also outline combining this with a forward renderer to allow for transparent objects and particles.

The screenshots in this post use 1000 point lights arranged in a 10x10x10 cube around the model.

posted on 2010-08-15 10:01 by 狂爛球, filed under: Graphics Programming
