
Soft-Edged Shadows
by Anirudh.S Shastry



Introduction

Originally, dynamic shadowing techniques were possible only in a limited way. But with the advent of powerful programmable graphics hardware, dynamic shadow techniques have nearly completely replaced static techniques like light mapping and semi-dynamic techniques like projected shadows. The two most popular dynamic shadowing techniques are shadow volumes and shadow mapping.

A closer look

Shadow volumes is a geometry-based technique that requires extruding the geometry away from the light to generate a closed volume. Then, via ray casting, the shadowed portions of the scene can be determined (usually the stencil buffer is used to simulate ray casting). This technique is pixel-accurate and doesn't suffer from any aliasing problems, but as with any technique, it has its share of disadvantages. The two major problems are that it is heavily geometry-dependent and fill-rate intensive. Because of this, shadow mapping is slowly becoming more popular.

Shadow mapping, on the other hand, is an image-space technique that involves rendering the scene depth from the light's point of view and using this depth information to determine which portions of the scene are in shadow. Though this technique has several advantages, it suffers from aliasing artifacts and z-fighting. But there are solutions to these problems, and since the advantages outweigh the disadvantages, it is the technique of choice in this article.

Soft shadows

Hard shadows destroy the realism of a scene, so we need to fake soft shadows to improve its visual quality. Plenty of over-zealous PhD students have published papers describing soft shadowing techniques, but in reality most of these techniques are not viable in real time, at least for complex scenes. Until we have hardware that can overcome some of their limitations, we will need to stick to more down-to-earth methods.

In this article, I present an image-space method for generating soft-edged shadows using shadow maps. This method doesn't generate perfect soft shadows (there is no distinct umbra and penumbra), but it not only solves the aliasing problems of shadow mapping, it also improves visual quality with aesthetically pleasing soft-edged shadows.

So how does it work?

First, we generate the shadow map as usual by rendering the scene depth from the light's point of view into a floating-point buffer. Then, instead of rendering the scene with shadows directly, we render the shadowed regions into a screen-sized buffer. Now we can blur this using a bloom filter and project it back onto the scene in screen space. Sounds simple, right?

In this article, we only deal with spot lights, but this technique can easily be extended to handle point lights as well.

Here are the steps (a sketch of how these passes fit together each frame follows the list):

  • Generate the shadow map as usual by writing the scene depth into a floating-point buffer.
  • Render the shadowed portions of the scene, after the depth comparison, into a fixed-point texture, without any lighting.
  • Blur the above buffer using a bloom filter (though we use a separable Gaussian filter in this article, any filter can be used).
  • Project the blurred buffer onto the scene in screen space to get cool soft-edged shadows, along with full lighting.
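
To make the flow concrete, here's a minimal sketch of how these passes might be sequenced each frame. The Render* helpers and g_pBackBuffer are hypothetical names standing in for draw code and state described later in the article; the render targets are the ones created in the steps below.

// Hypothetical per-frame driver for the four passes (BeginScene/EndScene
// omitted for brevity). The Render* helpers are placeholders, not part of
// the article's code.
void RenderFrame( LPDIRECT3DDEVICE9 pDevice )
{
   // Pass 1: scene depth from the light's point of view into the shadow map
   pDevice->SetRenderTarget( 0, g_pShadowSurf );
   pDevice->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xFFFFFFFF, 1.0f, 0 );
   RenderSceneDepth();                       // VS_Shadow / PS_Shadow

   // Pass 2: the shadow term into the screen-sized buffer
   pDevice->SetRenderTarget( 0, g_pScreenSurf );
   pDevice->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xFFFFFFFF, 1.0f, 0 );
   RenderShadowTerm();                       // VS_Unlit / PS_Unlit

   // Passes 3a/3b: horizontal, then vertical Gaussian blur
   pDevice->SetRenderTarget( 0, g_pBlurSurf[0] );
   RenderBlurPass( g_pScreenMap, true );     // PS_BlurH reads the screen map
   pDevice->SetRenderTarget( 0, g_pBlurSurf[1] );
   RenderBlurPass( g_pBlurMap[0], false );   // PS_BlurV reads the first blur map

   // Pass 4: the fully lit scene, modulated by the blurred shadow term
   pDevice->SetRenderTarget( 0, g_pBackBuffer );
   RenderFinalScene();                       // VS_Scene / PS_Scene
}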

Step 1: Rendering the shadow map

First, we need to create a texture that can hold the scene depth. Since we will use it as a render target, we also need to create a surface that holds the texture's surface data. The texture must be a floating-point one because of the large range of depth values; the R32F format has sufficient precision, so we use it. Here's the code that creates the texture.

// Create the shadow map
if( FAILED( g_pd3dDevice->CreateTexture( SHADOW_MAP_SIZE, SHADOW_MAP_SIZE, 1,
                                         D3DUSAGE_RENDERTARGET, D3DFMT_R32F,
                                         D3DPOOL_DEFAULT, &g_pShadowMap,
                                         NULL ) ) )
{
   MessageBox( g_hWnd, "Unable to create shadow map!",
               "Error", MB_OK | MB_ICONERROR );
   return E_FAIL;
}

// Grab the texture's surface
g_pShadowMap->GetSurfaceLevel( 0, &g_pShadowSurf );
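
One detail the snippet above leaves out: rendering depth from the light still needs a z-buffer, and since the shadow map's dimensions generally differ from the back buffer's, a matching depth-stencil surface is typically created alongside it. A sketch, with g_pShadowDepth as a hypothetical variable:

// Hypothetical companion depth-stencil surface for the shadow pass,
// sized to match the shadow map (not shown in the article's snippet)
LPDIRECT3DSURFACE9 g_pShadowDepth = NULL;
if( FAILED( g_pd3dDevice->CreateDepthStencilSurface( SHADOW_MAP_SIZE, SHADOW_MAP_SIZE,
                                                     D3DFMT_D24S8, D3DMULTISAMPLE_NONE,
                                                     0, TRUE, &g_pShadowDepth, NULL ) ) )
{
   return E_FAIL;
}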

Now, to generate the shadow map, we render the scene's depth into it. To do this, we render the scene using the light's world-view-projection matrix. Here's how we build that matrix.

// Ordinary view matrix
D3DXMatrixLookAtLH( &matView, &vLightPos, &vLightAim, &g_vUp );

// Projection matrix for the light
D3DXMatrixPerspectiveFovLH( &matProj, D3DXToRadian(30.0f), 1.0f, 1.0f, 1024.0f );

// Concatenate the world matrix with the above two to get the required matrix
matLightViewProj = matWorld * matView * matProj;
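
The article doesn't show how this matrix reaches the shader; one plausible route, assuming the shaders were compiled with D3DXCompileShaderFromFile and the constant table was retained (g_pConstantTable is a hypothetical handle), looks like this:

// Hypothetical binding code: feed matLightViewProj to the vertex shader's
// g_matLightViewProj constant through a D3DX constant table
D3DXHANDLE hMatrix = g_pConstantTable->GetConstantByName( NULL, "g_matLightViewProj" );
g_pConstantTable->SetMatrix( g_pd3dDevice, hMatrix, &matLightViewProj );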

Here are the vertex and pixel shaders for rendering the scene depth.

// Shadow generation vertex shader
struct VSOUTPUT_SHADOW
{
   float4 vPosition    : POSITION;
   float  fDepth       : TEXCOORD0;
};

VSOUTPUT_SHADOW VS_Shadow( float4 inPosition : POSITION )
{
   // Output struct
   VSOUTPUT_SHADOW OUT = (VSOUTPUT_SHADOW)0;
   // Output the transformed position
   OUT.vPosition = mul( inPosition, g_matLightViewProj );
   // Output the scene depth
   OUT.fDepth = OUT.vPosition.z;
   return OUT;
}

Here, we multiply the position by the light's world-view-projection matrix (g_matLightViewProj) and use the transformed position's z-value as the depth. In the pixel shader, we output the depth as the color.

float4 PS_Shadow( VSOUTPUT_SHADOW IN ) : COLOR0
{
   // Output the scene depth
   return float4( IN.fDepth, IN.fDepth, IN.fDepth, 1.0f );
}

Voila! We have the shadow map. Below is a color-coded version of the shadow map: dark blue indicates smaller depth values, light blue indicates larger depth values.

Step 2: Rendering the shadowed scene into a buffer

Next, we need to render the shadowed portions of the scene into an off-screen buffer so that we can blur the result and project it back onto the scene. To do that, we render the shadowed portions into a screen-sized fixed-point texture.

// Create the screen-sized buffer map
if( FAILED( g_pd3dDevice->CreateTexture( SCREEN_WIDTH, SCREEN_HEIGHT, 1,
                                         D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8,
                                         D3DPOOL_DEFAULT, &g_pScreenMap, NULL ) ) )
{
   MessageBox( g_hWnd, "Unable to create screen map!",
               "Error", MB_OK | MB_ICONERROR );
   return E_FAIL;
}

// Grab the texture's surface
g_pScreenMap->GetSurfaceLevel( 0, &g_pScreenSurf );

To get the projective texture coordinates, we need a "texture" matrix that maps positions from projection space to texture space: the 0.5 scale and offset remap x and y from [-1, 1] to [0, 1], the sign flip on y accounts for texture space running top-down, and the extra half-texel offset (0.5 / SHADOW_MAP_SIZE) aligns texel centers with pixel centers, as Direct3D 9 sampling requires.

// Generate the texture matrix
float fTexOffs = 0.5f + (0.5f / (float)SHADOW_MAP_SIZE);
D3DXMATRIX matTexAdj( 0.5f,     0.0f,     0.0f, 0.0f,
                      0.0f,     -0.5f,    0.0f, 0.0f,
                      0.0f,     0.0f,     1.0f, 0.0f,
                      fTexOffs, fTexOffs, 0.0f, 1.0f );

matTexture = matLightViewProj * matTexAdj;
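
Worked out explicitly: for a clip-space position $(x, y, z, w)$ produced by matLightViewProj, multiplying by matTexAdj and letting tex2Dproj divide by $w$ yields (writing $S$ for SHADOW_MAP_SIZE):

$$u = \frac{x}{2w} + \frac{1}{2} + \frac{1}{2S}, \qquad v = -\frac{y}{2w} + \frac{1}{2} + \frac{1}{2S}$$

which is exactly the $[-1, 1] \to [0, 1]$ remap described above, nudged by half a texel.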

We get the shadow factor as usual by depth comparison, but instead of outputting the completely lit scene, we output only the shadow factor. Here are the vertex and pixel shaders that do the job.

// Shadow mapping vertex shader
struct VSOUTPUT_UNLIT
{
   float4 vPosition   : POSITION;
   float4 vTexCoord   : TEXCOORD0;
   float  fDepth      : TEXCOORD1;
};

VSOUTPUT_UNLIT VS_Unlit( float4 inPosition : POSITION )
{
   // Output struct
   VSOUTPUT_UNLIT OUT = (VSOUTPUT_UNLIT)0;

   // Output the transformed position
   OUT.vPosition = mul( inPosition, g_matWorldViewProj );

   // Output the projective texture coordinates
   OUT.vTexCoord = mul( inPosition, g_matTexture );

   // Output the scene depth
   OUT.fDepth = mul( inPosition, g_matLightViewProj ).z;

   return OUT;
}

We use percentage closer filtering (PCF) to smooth out the jagged edges. To "do" PCF, we simply sample the 8 surrounding texels (we're using a 3x3 PCF kernel here) along with the center texel and average all nine depth comparisons.

// Shadow mapping pixel shader
float4 PS_Unlit( VSOUTPUT_UNLIT IN ) : COLOR0
{
   // Generate the 9 texture co-ordinates for a 3x3 PCF kernel
   float4 vTexCoords[9];
   // Texel size
   float fTexelSize = 1.0f / 1024.0f;

   // Generate the texture co-ordinates for the specified depth-map size
   // 4 3 5
   // 1 0 2
   // 7 6 8
   vTexCoords[0] = IN.vTexCoord;
   vTexCoords[1] = IN.vTexCoord + float4( -fTexelSize, 0.0f, 0.0f, 0.0f );
   vTexCoords[2] = IN.vTexCoord + float4(  fTexelSize, 0.0f, 0.0f, 0.0f );
   vTexCoords[3] = IN.vTexCoord + float4( 0.0f, -fTexelSize, 0.0f, 0.0f );
   vTexCoords[6] = IN.vTexCoord + float4( 0.0f,  fTexelSize, 0.0f, 0.0f );
   vTexCoords[4] = IN.vTexCoord + float4( -fTexelSize, -fTexelSize, 0.0f, 0.0f );
   vTexCoords[5] = IN.vTexCoord + float4(  fTexelSize, -fTexelSize, 0.0f, 0.0f );
   vTexCoords[7] = IN.vTexCoord + float4( -fTexelSize,  fTexelSize, 0.0f, 0.0f );
   vTexCoords[8] = IN.vTexCoord + float4(  fTexelSize,  fTexelSize, 0.0f, 0.0f );

   // Sample each of them, checking whether the pixel under test is shadowed or not
   float fShadowTerms[9];
   float fShadowTerm = 0.0f;
   for( int i = 0; i < 9; i++ )
   {
      float A = tex2Dproj( ShadowSampler, vTexCoords[i] ).r;
      // Subtract a small bias from the depth to avoid self-shadowing artifacts
      float B = (IN.fDepth - 0.1f);

      // Texel is shadowed
      fShadowTerms[i] = A < B ? 0.0f : 1.0f;
      fShadowTerm    += fShadowTerms[i];
   }
   // Get the average
   fShadowTerm /= 9.0f;
   return fShadowTerm;
}

The screen buffer is good to go! Now all we need to do is blur this and project it back onto the scene in screen space.

Step 3: Blurring the screen buffer

We use a separable Gaussian filter to blur the screen buffer, but one could also use a Poisson filter. The render targets this time are A8R8G8B8 textures with corresponding surfaces. We need two render targets: one for the horizontal pass and one for the vertical pass.

// Create the blur maps
for( int i = 0; i < 2; i++ )
{
   if( FAILED( g_pd3dDevice->CreateTexture( SCREEN_WIDTH, SCREEN_HEIGHT, 1,
                                            D3DUSAGE_RENDERTARGET,
                                            D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                                            &g_pBlurMap[i], NULL ) ) )
   {
      MessageBox( g_hWnd, "Unable to create blur map!",
                  "Error", MB_OK | MB_ICONERROR );
      return E_FAIL;
   }

   // Grab the texture's surface
   g_pBlurMap[i]->GetSurfaceLevel( 0, &g_pBlurSurf[i] );
}

We generate 15 Gaussian offsets and their corresponding weights using the following functions.

float GetGaussianDistribution( float x, float y, float rho )
{
   float g = 1.0f / sqrt( 2.0f * 3.141592654f * rho * rho );
   return g * exp( -(x * x + y * y) / (2 * rho * rho) );
}

void GetGaussianOffsets( bool bHorizontal, D3DXVECTOR2 vViewportTexelSize,
                         D3DXVECTOR2* vSampleOffsets, float* fSampleWeights )
{
   // Get the center texel offset and weight
   fSampleWeights[0] = 1.0f * GetGaussianDistribution( 0, 0, 2.0f );
   vSampleOffsets[0] = D3DXVECTOR2( 0.0f, 0.0f );

   // Get the offsets and weights for the remaining taps
   if( bHorizontal )
   {
      for( int i = 1; i < 15; i += 2 )
      {
         vSampleOffsets[i + 0] = D3DXVECTOR2(  i * vViewportTexelSize.x, 0.0f );
         vSampleOffsets[i + 1] = D3DXVECTOR2( -i * vViewportTexelSize.x, 0.0f );
         fSampleWeights[i + 0] = 2.0f * GetGaussianDistribution( float(i + 0), 0.0f, 3.0f );
         fSampleWeights[i + 1] = 2.0f * GetGaussianDistribution( float(i + 1), 0.0f, 3.0f );
      }
   }
   else
   {
      for( int i = 1; i < 15; i += 2 )
      {
         vSampleOffsets[i + 0] = D3DXVECTOR2( 0.0f,  i * vViewportTexelSize.y );
         vSampleOffsets[i + 1] = D3DXVECTOR2( 0.0f, -i * vViewportTexelSize.y );
         fSampleWeights[i + 0] = 2.0f * GetGaussianDistribution( 0.0f, float(i + 0), 3.0f );
         fSampleWeights[i + 1] = 2.0f * GetGaussianDistribution( 0.0f, float(i + 1), 3.0f );
      }
   }
}
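
One thing worth noting before using these values: the weights don't sum exactly to 1 (the center tap uses rho = 2 while the outer taps use rho = 3 with a factor of 2), so the blurred buffer's overall intensity is slightly rescaled. If exact energy preservation is wanted, a small hypothetical helper can renormalize them:

// Hypothetical helper (not in the article): rescale the 15 weights so they
// sum to 1, keeping the blurred shadow term's overall intensity unchanged
void NormalizeGaussianWeights( float* fSampleWeights, int nTaps )
{
   float fTotal = 0.0f;
   for( int i = 0; i < nTaps; i++ )
      fTotal += fSampleWeights[i];
   for( int i = 0; i < nTaps; i++ )
      fSampleWeights[i] /= fTotal;
}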

To blur the screen buffer, we set the blur map as the render target and render a screen sized quad with the following vertex and pixel shaders.

// Gaussian filter vertex shader
struct VSOUTPUT_BLUR
{
   float4 vPosition    : POSITION;
   float2 vTexCoord    : TEXCOORD0;
};

VSOUTPUT_BLUR VS_Blur( float4 inPosition : POSITION, float2 inTexCoord : TEXCOORD0 )
{
   // Output struct
   VSOUTPUT_BLUR OUT = (VSOUTPUT_BLUR)0;
   // Output the position
   OUT.vPosition = inPosition;
   // Output the texture coordinates
   OUT.vTexCoord = inTexCoord;
   return OUT;
}
// Horizontal blur pixel shader
float4 PS_BlurH( VSOUTPUT_BLUR IN ) : COLOR0
{
   // Accumulated color
   float4 vAccum = float4( 0.0f, 0.0f, 0.0f, 0.0f );
   // Sample the taps (g_vSampleOffsets holds the texel offsets
   // and g_fSampleWeights holds the texel weights)
   for( int i = 0; i < 15; i++ )
   {
      vAccum += tex2D( ScreenSampler, IN.vTexCoord + g_vSampleOffsets[i] ) * g_fSampleWeights[i];
   }
   return vAccum;
}

// Vertical blur pixel shader
float4 PS_BlurV( VSOUTPUT_BLUR IN ) : COLOR0
{
   // Accumulated color
   float4 vAccum = float4( 0.0f, 0.0f, 0.0f, 0.0f );
   // Sample the taps (g_vSampleOffsets holds the texel offsets and
   // g_fSampleWeights holds the texel weights)
   for( int i = 0; i < 15; i++ )
   {
      vAccum += tex2D( BlurHSampler, IN.vTexCoord + g_vSampleOffsets[i] ) * g_fSampleWeights[i];
   }
   return vAccum;
}
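
For completeness, here's one way the application side might drive the two passes. This is a hedged sketch, not the article's framework code: DrawFullScreenQuad is a hypothetical helper that renders the screen-sized quad mentioned above, g_pBlurHShader and g_pBlurVShader are hypothetical handles to the compiled pixel shaders, and stage 0 is assumed to be where the blur samplers are bound.

// Horizontal pass: screen map -> blur map 0
g_pd3dDevice->SetRenderTarget( 0, g_pBlurSurf[0] );
g_pd3dDevice->SetTexture( 0, g_pScreenMap );      // ScreenSampler
g_pd3dDevice->SetPixelShader( g_pBlurHShader );   // PS_BlurH
DrawFullScreenQuad();

// Vertical pass: blur map 0 -> blur map 1
g_pd3dDevice->SetRenderTarget( 0, g_pBlurSurf[1] );
g_pd3dDevice->SetTexture( 0, g_pBlurMap[0] );     // BlurHSampler
g_pd3dDevice->SetPixelShader( g_pBlurVShader );   // PS_BlurV
DrawFullScreenQuad();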

There, the blur maps are ready. To increase the blurriness of the shadows, increase the texel sampling distance. The last step, of course, is to project the blurred map back onto the scene in screen space.


After first Gaussian pass


After second Gaussian pass

Step 4: Rendering the shadowed scene

To project the blur map onto the scene, we render the scene as usual but sample the blur map using screen-space coordinates. We derive the screen-space coordinates from the clip-space position with a little hard-coded math (the same clip-to-texture remapping used for the shadow map, minus the half-texel offset). The vertex and pixel shaders below render the scene with per-pixel lighting along with shadows.

struct VSOUTPUT_SCENE
{
   float4 vPosition      : POSITION;
   float2 vTexCoord      : TEXCOORD0;
   float4 vProjCoord     : TEXCOORD1;
   float4 vScreenCoord   : TEXCOORD2;
   float3 vNormal        : TEXCOORD3;
   float3 vLightVec      : TEXCOORD4;
   float3 vEyeVec        : TEXCOORD5;
};

// Scene vertex shader
VSOUTPUT_SCENE VS_Scene( float4 inPosition : POSITION, float3 inNormal : NORMAL,
                         float2 inTexCoord : TEXCOORD0 )
{
   VSOUTPUT_SCENE OUT = (VSOUTPUT_SCENE)0;
   // Output the transformed position
   OUT.vPosition = mul( inPosition, g_matWorldViewProj );
   // Output the texture coordinates
   OUT.vTexCoord = inTexCoord;
   // Output the projective texture coordinates
   // (we use this to project the spot texture down onto the scene)
   OUT.vProjCoord = mul( inPosition, g_matTexture );
   // Output the screen-space texture coordinates
   OUT.vScreenCoord.x = ( OUT.vPosition.x * 0.5 + OUT.vPosition.w * 0.5 );
   OUT.vScreenCoord.y = ( OUT.vPosition.w * 0.5 - OUT.vPosition.y * 0.5 );
   OUT.vScreenCoord.z = OUT.vPosition.w;
   OUT.vScreenCoord.w = OUT.vPosition.w;
   // Get the world space vertex position
   float4 vWorldPos = mul( inPosition, g_matWorld );
   // Output the world space normal
   OUT.vNormal = mul( inNormal, g_matWorldIT );
   // Compute the world space light vector
   OUT.vLightVec = g_vLightPos.xyz - vWorldPos.xyz;
   // Compute the world space eye vector
   OUT.vEyeVec = g_vEyePos.xyz - vWorldPos.xyz;
   return OUT;
}
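
For reference, those four vScreenCoord lines are just the clip-to-texture remap again: after tex2Dproj divides by $w$, the blur map is sampled at

$$u = \frac{x}{2w} + \frac{1}{2}, \qquad v = \frac{1}{2} - \frac{y}{2w}$$

i.e. the same mapping as the shadow-map texture matrix, without the half-texel offset.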

We add an additional spot term by projecting a spot texture down from the light. This not only simulates a spotlight effect, it also masks out the parts of the scene that fall outside the shadow map. The spot map is projected using standard projective texturing.

float4 PS_Scene( VSOUTPUT_SCENE IN ) : COLOR0
{
   // Normalize the normal, light and eye vectors
   IN.vNormal   = normalize( IN.vNormal );
   IN.vLightVec = normalize( IN.vLightVec );
   IN.vEyeVec   = normalize( IN.vEyeVec );
   // Sample the color map
   float4 vColor = tex2D( ColorSampler, IN.vTexCoord );
   // Compute the ambient, diffuse and specular lighting terms
   float ambient  = 0.0f;
   float diffuse  = max( dot( IN.vNormal, IN.vLightVec ), 0 );
   float specular = pow( max( dot( 2 * dot( IN.vNormal, IN.vLightVec ) * IN.vNormal
                                   - IN.vLightVec, IN.vEyeVec ), 0 ), 8 );
   if( diffuse == 0 ) specular = 0;
   // Grab the shadow term
   float fShadowTerm = tex2Dproj( BlurVSampler, IN.vScreenCoord );
   // Grab the spot term
   float fSpotTerm = tex2Dproj( SpotSampler, IN.vProjCoord );
   // Compute the final color
   return (ambient * vColor) +
          (diffuse * vColor * g_vLightColor * fShadowTerm * fSpotTerm) +
          (specular * vColor * g_vLightColor.a * fShadowTerm * fSpotTerm);
}

That's it! We have soft-edged shadows that look quite nice. The big advantage of this technique is that it completely removes the edge-aliasing artifacts that shadow mapping suffers from. Another advantage is that soft shadows for multiple lights cost little extra memory: each light needs its own shadow map, but the screen and blur buffers can be shared by all the lights. Finally, the method doesn't care how the shadows were generated, so it can be applied on top of either shadow maps or shadow volumes. One disadvantage is that it is a wee bit fill-rate intensive due to the Gaussian filter; this can be minimized by using smaller blur buffers and slightly sacrificing visual quality, as sketched below.
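
As an illustration of that last point, the blur targets from step 3 could be created at half resolution; this is a hedged variation, assuming the quad and the viewport texel size passed to GetGaussianOffsets are adjusted to match:

// Hedged variation on the earlier code: half-resolution blur targets trade
// a little edge-softness precision for a large fill-rate saving
for( int i = 0; i < 2; i++ )
{
   if( FAILED( g_pd3dDevice->CreateTexture( SCREEN_WIDTH / 2, SCREEN_HEIGHT / 2, 1,
                                            D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8,
                                            D3DPOOL_DEFAULT, &g_pBlurMap[i], NULL ) ) )
      return E_FAIL;

   g_pBlurMap[i]->GetSurfaceLevel( 0, &g_pBlurSurf[i] );
}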

Here's a comparison between the approach described here, 3x3 percentage closer filtering, and normal shadow mapping.

Thank you for reading my article. I hope you liked it. If you have any doubts, questions or comments, please feel free to mail me at anidex@yahoo.com. Here's the source code.

References

  • Hardware Shadow Mapping. Cass Everitt, Ashu Rege and Cem Cebenoyan.
  • Hardware-accelerated Rendering of Antialiased Shadows with Shadow Maps. Stefan Brabec and Hans-Peter Seidel.

