
concentrate on C/C++ related technology

plan, refactor, daily-build, self-discipline


only one vertex shader can be active at a time.
every vertex shader-driven program must run the following steps:
1) check for vertex shader support by examining the D3DCAPS8::VertexShaderVersion field. the D3DVS_VERSION(X, Y) macro encodes shader version X.Y:
if(pCaps->VertexShaderVersion < D3DVS_VERSION(1,1))
{
    return E_FAIL;
}
this checks whether the device supports vertex shader 1.1; the vertex shader version is reported in the D3DCAPS8 structure.
2) declare the vertex shader with the D3DVSD_* macros, which map vertex buffer streams to input registers.
you must declare a vertex shader before using it.
SetStreamSource: binds a vertex buffer to a device data stream (see D3DVSD_STREAM).
D3DVSD_REG: binds a single vertex register to a vertex element/property from the vertex stream.
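for example, a declaration for vertices carrying a position and a diffuse color in stream 0 might look like this (a sketch; the register layout is an assumption):

DWORD dwDecl[] =
{
    D3DVSD_STREAM(0),                // take data from stream 0
    D3DVSD_REG(0, D3DVSDT_FLOAT3),   // v0 = position (x, y, z)
    D3DVSD_REG(5, D3DVSDT_D3DCOLOR), // v5 = diffuse color
    D3DVSD_END()
};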
3) set the vertex shader constant registers with SetVertexShaderConstant.
you fill the vertex shader constant registers with SetVertexShaderConstant and read them back with GetVertexShaderConstant.
D3DVSD_CONSTANT: used in the vertex shader declaration; it can only be used once.
SetVertexShaderConstant: can be called before every DrawPrimitive* call.
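a typical use is uploading a transposed world-view-projection matrix into c0..c3 (a sketch; pDevice and the matrices are assumptions):

D3DXMATRIX matWVP = matWorld * matView * matProj;
D3DXMatrixTranspose(&matWVP, &matWVP);           // dp4-based transforms expect the transpose
pDevice->SetVertexShaderConstant(0, &matWVP, 4); // 4 quad-floats: c0..c3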
4) compile the previously written vertex shader with D3DXAssembleShader*.
the instruction set includes:
add dest, src1, src2: add src1 and src2 together.
dp3 dest, src1, src2: dest.x = dest.y = dest.z = dest.w = (src1.x * src2.x) + (src1.y * src2.y) + (src1.z * src2.z)
dp4 dest, src1, src2: dest.w = (src1.x * src2.x) + (src1.y * src2.y) + (src1.z * src2.z) + (src1.w * src2.w); dest.x, dest.y, and dest.z are not used.
dst dest, src1, src2: dest.x = 1; dest.y = src1.y * src2.y; dest.z = src1.z; dest.w = src2.w; useful for calculating the standard attenuation.
expp dest, src.w: float tmp = (float)pow(2, src.w); DWORD tmpd = *(DWORD*)&tmp & 0xffffff00; dest.z = *(float*)&tmpd;
lit dest, src

calculates lighting coefficients from two dot products and a power.
---------------------------------------------
to calculate the lighting coefficients, set up the registers as shown:

src.x = N*L ; the dot product between normal and direction to light
src.y = N*H ; the dot product between normal and half vector
src.z = ignored ; this value is ignored
src.w = specular power ; the value must be between -128.0 and 128.0
logp dest, src.w
float v = ABSF(src.w);
float tmp = (float)(log(v)/log(2));
DWORD tmpd = *(DWORD*)&tmp & 0xffffff00;
dest.z = *(float*)&tmpd;
mad dest, src1, src2, src3: dest = (src1 * src2) + src3
max dest, src1, src2: dest = (src1 >= src2) ? src1 : src2
min dest, src1, src2: dest = (src1 < src2) ? src1 : src2
mov dest, src: move src to dest
mul dest, src1, src2: set dest to the component-by-component product of src1 and src2
nop: do nothing
rcp dest, src.w
if(src.w == 1.0f)
{
  dest.x = dest.y = dest.z = dest.w = 1.0f;
}
else if(src.w == 0)
{
  dest.x = dest.y = dest.z = dest.w = PLUS_INFINITY();
}
else
{
  dest.x = dest.y = dest.z = dest.w = 1.0f/src.w;
}
            rsq dest, src

            reciprocal square root of src
            (much more useful than straight 'square root'):

            float v = ABSF(src.w);
            if(v == 1.0f)
            {
              dest.x = dest.y = dest.z = dest.w = 1.0f;
            }
            else if(v == 0)
            {
              dest.x = dest.y = dest.z = dest.w = PLUS_INFINITY();
            }
            else
            {
              v = (float)(1.0f / sqrt(v));
              dest.x = dest.y = dest.z = dest.w = v;
            }
sge dest, src1, src2: dest = (src1 >= src2) ? 1 : 0
slt dest, src1, src2: dest = (src1 < src2) ? 1 : 0

            The Vertex Shader ALU is a multi-threaded vector processor that operates on quad-float data. It consists of two functional units. The SIMD Vector Unit is responsible for the mov, mul, add, mad, dp3, dp4, dst, min, max, slt and sge instructions. The Special Function Unit is responsible for the rcp, rsq, log, exp and lit instructions.

rsq is used for normalizing vectors to be used in lighting equations.
the exponential instruction expp can be used for fog effects and procedural noise generation.
a log function is the inverse of an exponential function: it undoes the operation of the exponential function.

            The lit instruction deals by default with directional lights. It calculates the diffuse & specular factors with clamping based on N * L and N * H and the specular power. There is no attenuation involved, but you can use an attenuation level separately with the result of lit by using the dst instruction. This is useful for constructing attenuation factors for point and spot lights.

            The min and max instructions allow for clamping and absolute value computation.
            Using the Input Registers

            The 16 input registers can be accessed by using their names v0 to v15. Typical values provided to the input vertex registers are:

            • Position(x,y,z,w)
            • Diffuse color (r,g,b,a) -> 0.0 to +1.0
            • Specular color (r,g,b,a) -> 0.0 to +1.0
            • Up to 8 Texture coordinates (each as s, t, r, q or u, v , w, q) but normally 4 or 6, dependent on hardware support
            • Fog (f,*,*,*) -> value used in fog equation
            • Point size (p,*,*,*)

The input registers are read-only. each instruction may access only one vertex input register. unspecified components of the input registers default to 0.0 for .x, .y, .z and 1.0 for .w.

all data in an input register remains persistent throughout the vertex shader execution and even longer: the registers retain their data beyond the lifetime of a vertex shader, so it is possible to reuse the data of the input registers in the next vertex shader.

            Using the Constant Registers

            Typical uses for the constant registers include:

            • Matrix data: quad-floats are typically one row of a 4x4 matrix
            • Light characteristics, (position, attenuation etc)
            • Current time
            • Vertex interpolation data
            • Procedural data

the constant registers are read-only from the perspective of the vertex shader, whereas the application can read and write them. like the input registers, their contents persist, which allows an application to avoid redundant SetVertexShaderConstant() calls.
            Using the Address Register
you access the address registers with a0 to an (more than one address register is available in vertex shader versions higher than 1.1).
            Using the Temporary Registers
you can access the 12 temporary registers using r0 to r11.
each temporary register has single write and triple read access, so an instruction may source the same temporary register up to three times. vertex shaders cannot read a value from a temporary register before writing to it; if you try to read a temporary register that was not filled with a value, the API gives you an error message when creating the vertex shader (CreateVertexShader).
            Using the Output Registers
there are up to 13 write-only output registers that can be accessed using the following register names. they are defined as the inputs to the rasterizer, and the name of each register is preceded by a lowercase 'o'. the output registers are named to suggest their use by pixel shaders.
every vertex shader must write to at least one component of oPos, or the assembler will report an error.
swizzling and masking
if you use the input, constant, and temporary registers as source registers, you can swizzle the .x, .y, .z, and .w values independently of each other.
if you use the output and temporary registers as destination registers, you can use the .x, .y, .z, and .w values as write masks.
component modifier      description
R.[x][y][z][w]          destination mask
R.xwzy                  source swizzle
-R                      source negation
Guidelines for writing vertex shaders
the most important restrictions to remember when writing vertex shaders:
they must write to at least one component of the output register oPos.
there is a 128-instruction limit.
every instruction may source no more than one constant register; e.g., add r0, c4, c3 will fail.
every instruction may source no more than one input register; e.g., add r0, v1, v2 will fail.
there are no C-like conditional statements, but you can mimic an instruction of the form r0 = (r1 >= r2) ? r3 : r4 with the sge instruction.
all iterated values transferred out of the vertex shader are clamped to [0..1].
several ways to optimize vertex shaders:
when setting vertex shader constant data, try to set all data in one SetVertexShaderConstant call.
pause and think about using a mov instruction; you may be able to avoid it.
choose instructions that perform multiple operations over instructions that perform single operations.
collapse (remove complex instructions like m4x4 or m3x3) vertex shaders before thinking about optimizations.
a rule of thumb for load-balancing between the CPU and GPU: many calculations in shaders can be pulled out and reformulated per-object instead of per-vertex and put into constant registers. if you are doing some calculation per object rather than per vertex, do it on the CPU and upload the result to the vertex shader as a constant, rather than doing it on the GPU.
one of the most interesting ways to optimize your application's bandwidth usage is to use compressed vertex data.
Compiling a Vertex Shader
Direct3D uses byte-codes, whereas OpenGL implementations parse a string; therefore the Direct3D developer needs to assemble the vertex shader source with an assembler. this might help you find bugs earlier in your development cycle, and it also reduces load time.
three different ways to compile a vertex shader:
write the vertex shader source into a separate ASCII file, for example test.vsh, and compile it with a vertex shader assembler into a binary file, for example test.vso. this file is opened and read at game startup; this way, not every person will be able to read and modify your vertex shader source.
write the vertex shader source into a separate ASCII file, or as a char string in your *.cpp file, and compile it "on the fly" while the application starts up with the D3DXAssembleShader*() functions.
write the vertex shader source in an effects file and open this effect file when the application starts up. the vertex shader can be compiled by reading the effect file with D3DXCreateEffectFromFile. it is also possible to pre-compile an effects file. this way, most of the handling of vertex shaders is simplified and handled by the effect-file functions.
             
5) create a vertex shader handle with CreateVertexShader.
the CreateVertexShader function is used to create and validate a vertex shader.
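a minimal sketch of steps 4 and 5 together, assuming DX8-era signatures, the dwDecl declaration from step 2, and an IDirect3DDevice8* pDevice; error handling is elided:

const char* szShader =
    "vs.1.1\n"
    "dp4 oPos.x, v0, c0\n"   // transform the position by the matrix in c0..c3
    "dp4 oPos.y, v0, c1\n"
    "dp4 oPos.z, v0, c2\n"
    "dp4 oPos.w, v0, c3\n"
    "mov oD0, v5\n";         // pass the diffuse color through

LPD3DXBUFFER pCode = NULL, pErrors = NULL;
DWORD hShader = 0;
if (SUCCEEDED(D3DXAssembleShader(szShader, strlen(szShader), 0,
                                 NULL, &pCode, &pErrors)))
{
    pDevice->CreateVertexShader(dwDecl,
        (DWORD*)pCode->GetBufferPointer(), &hShader, 0);
    pCode->Release();
}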
6) set the vertex shader with SetVertexShader for a specific object.
you set a vertex shader for a specific object by calling SetVertexShader before that object's DrawPrimitive* call.
the vertex shader set this way is then executed once for every vertex drawn.
7) delete the vertex shader with DeleteVertexShader().
when the game shuts down or the device is changed, the resources taken by the vertex shader must be released. this is done by calling DeleteVertexShader with the vertex shader handle.

Point light source.
a point light source has color and position within a scene, but no single direction. all light rays originate from one point and illuminate equally in all directions. the intensity of the rays remains constant regardless of their distance from the point source unless a falloff value is explicitly stated. a point light is useful for simulating a light bulb.

to get a wider range of effects, a decent attenuation equation is used:
attenuation = 1 / (A0 + A1*dL + A2*dL*dL)
where dL is the distance from the light; for example, with A0 = 1, A1 = 0.5, A2 = 0 the intensity at dL = 2 is halved.

posted @ 2008-12-09 11:18 jolley

Abstract: samplers: a window into video memory with associated state defining things like filtering and texture coordinate addressing mode. in DXSDK version 8.0 or earlier, the application can pass...
posted @ 2008-11-27 19:53 jolley

DirectSound provides simulated sound-source objects and a listener. the relationship between a source and the listener is described by three variables: position in 3D space, velocity, and direction of movement.
the conditions that produce 3D sound effects are: 1) the source is stationary and the listener moves, 2) the source moves and the listener is stationary, 3) both the source and the listener move.
in a 3D environment, a sound source is represented by the IDirectSound3DBuffer8 interface. only DirectSound buffers created with the DSBCAPS_CTRL3D flag support this interface, which provides functions to set and get the source's properties. the IDirectSound3DListener8 interface is obtained from the primary buffer; through it we control most parameters of the acoustic environment, such as the amount of Doppler shift and the rolloff rate of volume attenuation.

the closer the listener is to the source, the louder the sound; the farther away, the quieter, until it fades out entirely.
the source's minimum distance is the point at which the volume starts to attenuate sharply with distance.
DirectSound's default minimum distance DS3D_DEFAULTMINDISTANCE is defined as 1 unit, or 1 meter: the sound is at full volume at 1 meter, attenuates by half at 2 meters, to a quarter at 4 meters, and so on.
the maximum distance is the distance beyond which the source's volume no longer attenuates.
sound buffer processing modes: normal, head-relative, disabled.

in normal mode, the source's position and orientation are absolute values in world space. this mode suits sources that do not move relative to the listener.

in head-relative mode, all 3D properties of the source are relative to the listener's current position, velocity, and orientation. when the listener moves or turns, the 3D buffer's world-space values are automatically readjusted. this mode can implement a sound that keeps buzzing around the listener's head; sounds that simply follow the listener do not need 3D sound at all.

in disabled mode, 3D processing is turned off and all sounds appear to come from the listener's head.
pay attention to two positions: the source position and the listener position. the problem I ran into earlier was exactly this: listenerPosition was recorded when the login screen played its UI sounds, but after entering the game the player's coordinates were never reassigned to the listener position and updated continuously with the player's state.
1) if the source position is correct but the listener position is wrong, the effective distance to the sound cannot be established, which makes the 3D effect behave like ambient sound, audible wherever you walk.
2) the position fetched from the model was only the initial position; the position later bound to the model was never fixed up, so both the source position and the listener position were wrong.
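a minimal sketch of the fix, assuming pListener and p3DBuffer are already-created IDirectSound3DListener8 and IDirectSound3DBuffer8 pointers; positions and distances are illustrative:

pListener->SetPosition(0.0f, 0.0f, 0.0f, DS3D_IMMEDIATE);   // keep this synced to the player
p3DBuffer->SetMinDistance(1.0f, DS3D_IMMEDIATE);            // full volume within 1 meter
p3DBuffer->SetMaxDistance(100.0f, DS3D_IMMEDIATE);          // no further rolloff past 100 meters
p3DBuffer->SetPosition(10.0f, 0.0f, 0.0f, DS3D_IMMEDIATE);  // the source, 10 meters away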

later I wrote test code in DirectSound: I set the listener and the sound buffer to the corresponding positions and played the sound there, and found no sense of distance or attenuation at all. the Play3DSound sample in DirectSound is not truly positional sound either: it merely moves the source around with sine/cosine functions, so the sound feels like it is flying about, with no attenuation component. I also ran tests there fixing the source position and varying the listener's distance, expecting the sound to get louder near the source and quieter away from it; in practice it was just like playing ordinary music. the one thing worth praising is that DirectSound has the secondary-buffer concept to support mixing and can play music; for real 3D audio it is better not to use DirectSound. I recommend something like FMOD or OpenAL instead, which is more practical.

DSBCAPS_CTRLPAN | DSBCAPS_CTRLVOLUME | DSBCAPS_CTRLFREQUENCY = DSBCAPS_CTRLDEFAULT.

DirectSound does not support mixing stereo sounds (stereo with stereo, or mono with stereo); it only mixes mono sounds, and it requires the sounds' format information (e.g., frequency and sample size) to match. 8-bit samples at a 22 kHz sampling rate are recommended; GoldWave is a suitable conversion tool.
posted @ 2008-10-31 10:10 jolley

Direct3D ---- HAL ---- graphics device.
REF device: the reference rasterizer, which emulates the whole of Direct3D in software.
this allows you to write and test code that uses Direct3D features that are not available on your hardware.
device types: D3DDEVTYPE_REF, D3DDEVTYPE_HAL.
surface: a matrix of pixels that Direct3D uses primarily to store 2D image data.
though we visualize the surface data as a matrix, the pixel data is actually stored in a linear array.
the width and height of a surface are measured in pixels.
IDirect3DSurface9 includes several methods:
1) LockRect: obtain a pointer to the surface memory.
2) UnlockRect: every LockRect must be paired with an UnlockRect.
3) GetDesc: retrieve a description of the surface by filling out a D3DSURFACE_DESC structure.
the lock/unlock pattern is sketched below.
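a short sketch, assuming a lockable 32-bit (D3DFMT_X8R8G8B8) surface named surface; it fills the surface with red:

D3DSURFACE_DESC desc;
surface->GetDesc(&desc);

D3DLOCKED_RECT lockedRect;
surface->LockRect(&lockedRect, 0, 0);          // lock the entire surface, no flags
DWORD* imageData = (DWORD*)lockedRect.pBits;
for (UINT i = 0; i < desc.Height; i++)
{
    for (UINT j = 0; j < desc.Width; j++)
    {
        // Pitch is in bytes and may be wider than Width * 4.
        imageData[i * lockedRect.Pitch / 4 + j] = 0xffff0000; // ARGB red
    }
}
surface->UnlockRect();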

multisampling: smooths out the blocky-looking images that can result from representing images as a matrix of pixels.
multisample values: D3DMULTISAMPLE_NONE, and D3DMULTISAMPLE_1_SAMPLE through D3DMULTISAMPLE_16_SAMPLE.

we often need to specify the pixel format of Direct3D resources when we create a surface or texture.
the format of a pixel is defined by specifying a member of the D3DFORMAT enumerated type.
D3DFMT_R8G8B8, D3DFMT_X8R8G8B8, and D3DFMT_A8R8G8B8 are widely supported.
D3DPOOL_DEFAULT: instructs Direct3D to place the resource in the memory best suited for the resource type and its usage;
it may be video memory, AGP memory, or system memory.

D3DPOOL_MANAGED: resources placed in the managed pool are managed by Direct3D (that is, they are moved to video or AGP memory as needed), and a backup copy of each resource is maintained in system memory.
when resources are accessed and changed by the application, they work with the system copy;
Direct3D then automatically updates the video-memory copy as needed.

            D3DPOOL_SYSTEMMEM:specifies that the resource be placed in system memory.

D3DPOOL_SCRATCH: also places the resource in system memory; the difference from D3DPOOL_SYSTEMMEM is that these resources are not bound by the graphics device's restrictions.

Direct3D maintains a collection of surfaces, usually two or three, called a swap chain, represented by the IDirect3DSwapChain9 interface.

swap chains, and more specifically the technique of page flipping, are used to provide smooth animation between frames.
Front buffer: the contents of this buffer are currently being displayed by the monitor.
Back buffer: the frame currently being processed is rendered to this buffer.

the application's frame rate is often out of sync with the monitor's refresh rate. we do not want to update the contents of the front buffer with the next frame of animation until the monitor has finished drawing the current frame, but we do not want to halt our rendering while waiting for the monitor to finish displaying the contents of the front buffer either.

so we render to an off-screen surface (the back buffer); then, when the monitor is done displaying the surface in the front buffer, we move that surface to the end of the swap chain, and the next back buffer in the swap chain is promoted to be the front buffer.
this process is called presenting.

            the depth buffer is a surface that does not contain image data but rather depth information about a particular pixel.
            there is an entry in the depth buffer that corresponds to each pixel in the final rendered image.

            In order for Direct3D to determine which pixels of an object are in front of another,
            it uses a technique called depth buffering or z-buffering.

            depth buffering works by computing a depth value for each pixel,and performing a depth test.
            the pixel with the depth value closest to the camera wins, and that pixel gets written.
            24-bit depth buffer is more accurate.

            software vertex processing is always supported and can always be used.
            hardware vertex processing can only be used if the graphics card supports vertex processing in hardware.

            in our application, we can check if a device supports a feature
            by checking the corresponding data member or bit in the D3DCAPS9 instance.

            initializing Direct3D:
            1) Acquire a pointer to an IDirect3D9 interface.
2) check the device capabilities (D3DCAPS9) to see whether the primary display adapter (primary graphics card) supports hardware vertex processing (transformation & lighting).
            3) Initialize an instance of D3DPRESENT_PARAMETERS.
            4) Create the IDirect3DDevice9 object based on an initialized D3DPRESENT_PARAMETERS structure.

1) Direct3DCreate9(D3D_SDK_VERSION); passing D3D_SDK_VERSION guarantees that the application is built against the correct header files.
            IDirect3D9 object is used for two things: device enumeration and creating the IDirect3DDevice9 object.
            device enumeration refers to finding out the capabilities, display modes,
            formats, and other information about each graphics device available on the system.

            2) check the D3DCAPS9 structure.
            use caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT to check which type of vertex processing that the display card supports.

            3) Fill out the D3DPRESENT_PARAMETERS structure.

            4)Create the IDirect3DDevice9 interface.
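a condensed sketch of the four steps (windowed mode; hWnd and error checking are assumed):

IDirect3D9* d3d9 = Direct3DCreate9(D3D_SDK_VERSION);

D3DCAPS9 caps;
d3d9->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);
DWORD vp = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT)
         ? D3DCREATE_HARDWARE_VERTEXPROCESSING
         : D3DCREATE_SOFTWARE_VERTEXPROCESSING;

D3DPRESENT_PARAMETERS d3dpp = {0};
d3dpp.Windowed               = TRUE;
d3dpp.SwapEffect             = D3DSWAPEFFECT_DISCARD;
d3dpp.BackBufferFormat       = D3DFMT_UNKNOWN;  // use the current display-mode format
d3dpp.EnableAutoDepthStencil = TRUE;
d3dpp.AutoDepthStencilFormat = D3DFMT_D24S8;    // 24-bit depth, 8-bit stencil

IDirect3DDevice9* device = 0;
d3d9->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd, vp,
                   &d3dpp, &device);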

            it works like this:
            we create a vertex list and an index list, the vertex list consists of all the unique vertices,
            and the index list contains values that index into the vertex list to
            define how they are to be put together to form triangles.

the camera specifies what part of the world the viewer can see and thus the part of the world for which we need to generate a 2D image.

this volume of space is a frustum, defined by the field-of-view angles and the near and far planes.

            the projection window is the 2d area
            that the 3d geometry inside the frustum gets projected onto to create the 2D image representation of the 3D scene.

            local space, or modeling space, is the coordinate system in which we define an object's triangle list.

objects in local space are transformed to world space through a process called the world transform, which usually consists of translation, rotation, and scaling operations that set the position, orientation, and size of the model in the world.
D3DXMatrixTranslation.

projection and other operations are difficult or less efficient when the camera is at an arbitrary position and orientation in the world.
to make things easier, we transform the camera to the origin of the world system and rotate it so that the camera is looking down the positive z-axis.

all geometry in the world is transformed along with the camera so that the view of the world remains the same. this transformation is called the view space transformation.
D3DXMatrixLookAtLH.

            Direct3D takes advantage of this by culling(discard from further processing) the back facing polygons,
            this is called backface culling.

            by default: Direct3D treats triangles with vertices specified in a clockwise winding order(in view space)as front facing.
            triangles with vertices specified in counterclockwise winding orders(in view space) are considered back facing.
            Lighting sources are defined in world space but transformed into view space by the view space transformation.
            in view space these light sources are applied to light the objects in the scene to give a more realistic appearance.

            we need to cull the geometry that is outside the viewing volume, this process is called clipping.

            in view space we have the task of obtaining a 2d representation of the 3D scene.

            the process of going from n dimension to an n-1 dimension is called projection.

there are many ways of performing a projection, but we are interested in a particular one called perspective projection.
a perspective projection projects geometry in such a way that foreshortening occurs.
this type of projection allows us to represent a 3D scene on a 2D image.

            the projection transformation defines our viewing volume(frustum) and
            is responsible for projecting the geometry in the frustum onto the projection window.
            D3DXMatrixPerspectiveFovLH.

the viewport transform is responsible for transforming coordinates on the projection window to a rectangle on the screen, which we call the viewport.

a vertex buffer is simply a chunk of contiguous memory that contains vertex data (IDirect3DVertexBuffer9).
an index buffer is a chunk of contiguous memory that contains index data (IDirect3DIndexBuffer9).

Set the stream source.
setting the stream source hooks up a vertex buffer to a stream that essentially feeds geometry into the rendering pipeline.

once we have created a vertex buffer and, optionally, an index buffer, we are almost ready to render their contents, but three steps must be taken first (a sketch follows the list):
1) set the stream source: SetStreamSource.
2) set the vertex format: SetFVF.
3) set the index buffer: SetIndices.
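put together, a minimal sketch (device, vb, ib, a Vertex struct, and the counts are assumptions):

device->SetStreamSource(0, vb, 0, sizeof(Vertex)); // 1) hook the vertex buffer to stream 0
device->SetFVF(D3DFVF_XYZ | D3DFVF_DIFFUSE);       // 2) set the vertex format
device->SetIndices(ib);                            // 3) set the index buffer
device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                             0, 0,
                             8,    // number of vertices
                             0,
                             12);  // number of primitives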

            D3DXCreateTeapot/D3DXCreateBox/D3DXCreateCylinder/D3DXCreateTorus/D3DXCreateSphere.

D3DCOLOR_ARGB / D3DCOLOR_XRGB / D3DCOLORVALUE
#define D3DCOLOR_XRGB(r,g,b) D3DCOLOR_ARGB(0xff,r,g,b)
typedef struct _D3DCOLORVALUE
{
    float r;
    float g;
    float b;
    float a;
} D3DCOLORVALUE;
each component ranges from 0.0f to 1.0f.

shading occurs during rasterization and specifies how the vertex colors are used to compute the pixel colors that make up the primitive.

            with flat shading, the pixels of a primitive are uniformly colored by the color specified
            in the first vertex of the primitive.

            with gouraud shading, the colors at each vertex are interpolated linearly across the face of the primitive.

in the Direct3D lighting model, the light emitted by a light source consists of three components, or three kinds of light.
ambient light:
models light that has reflected off other surfaces and is used to brighten up the overall scene.
diffuse light:
travels in a particular direction; when it strikes a surface, it reflects equally in all directions.

because diffuse light reflects equally in all directions, the reflected light reaches the eye no matter the viewpoint, so we do not need to take the viewer into consideration. thus, the diffuse lighting equation needs only to consider the light direction and the attitude of the surface.

specular light: when it strikes a surface, it reflects sharply in one direction, causing a bright shine that can only be seen from some angles.

since the light reflects in one direction, the viewpoint, in addition to the light direction and surface attitude, must be taken into consideration in the specular lighting equation.
specular light is used to model the highlights that appear when light strikes a polished surface.

            the material allows us to define the percentage at which light is reflected from the surface.
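for example, a material that reflects mostly red diffuse light (a sketch; the values are illustrative):

D3DMATERIAL9 mtrl;
::ZeroMemory(&mtrl, sizeof(mtrl));
mtrl.Diffuse.r = 1.0f; mtrl.Diffuse.g = 0.2f;  // reflect 100% of red diffuse light,
mtrl.Diffuse.b = 0.2f; mtrl.Diffuse.a = 1.0f;  // but only 20% of green and blue
mtrl.Ambient = mtrl.Diffuse;                   // reflect ambient light the same way
mtrl.Specular.r = mtrl.Specular.g = mtrl.Specular.b = mtrl.Specular.a = 1.0f;
mtrl.Power = 8.0f;                             // sharpness of the specular highlight
device->SetMaterial(&mtrl);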

            a face normal is a vector that describes the direction a polygon is facing.

            Direct3D needs to know the vertex normals so that it can determine the angle at which light strikes a surface,
            and since lighting calculations are done per vertex,
            Direct3D needs to know the surface orientation per vertex.

            Direct3D supports three types of light sources:

            point lights: the light source has a position in world space and emits light in all directions.

            directional lights: the light source has no position but shoots parallel rays of light in the specified direction.

            spot lights: it has position and shines light through a conical shape in a particular direction.

the cone is characterized by two angles, theta and phi; theta describes an inner cone, and phi describes an outer cone.

            texture mapping is a technique that allows us to map image data onto triangles.

            D3DFVF_TEX1: our vertex structure contains one pair of texture coordinates.

D3DXCreateTextureFromFile: loads a texture from disk; it can load bmp, dds, dib, jpg, png, and tga files.

SetTexture: sets the current texture.
filtering is a technique that Direct3D uses to help smooth out the distortions that occur when a texture is magnified (MAGFILTER) or minified (MINFILTER).

nearest point sampling:
the default filtering method; produces the worst-looking results but is the fastest to compute.
D3DSAMP_MAGFILTER, D3DTEXF_POINT.
D3DSAMP_MINFILTER, D3DTEXF_POINT.
linear filtering:
produces fairly good results and can be fast on today's hardware.
D3DSAMP_MAGFILTER, D3DTEXF_LINEAR.
D3DSAMP_MINFILTER, D3DTEXF_LINEAR.
anisotropic filtering:
provides the best results but takes the longest time to compute.
D3DSAMP_MAGFILTER, D3DTEXF_ANISOTROPIC.
D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC.
the anisotropy level should also be set via D3DSAMP_MAXANISOTROPY (here a maximum level of 4).
the idea behind mipmaps is to take a texture and create a series of smaller, lower-resolution textures, but customize the filtering for each of these levels so it preserves the detail that is important to us.

            the mipmap filter is used to control how Direct3D uses the mipmaps.

            D3DTEXF_NONE: Disable mipmapping
            D3DTEXF_POINT: Direct3D will choose the level that is closest in size to that triangle.
            D3DTEXF_LINEAR: Direct3D will choose two closest levels, filter each level with the min and mag filters,
            and linearly combine these two levels to form the final color values.

            mipmap chain is created automatically with the D3DXCreateTextureFromFile function if the device supports mipmapping.
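for instance, selecting trilinear filtering for sampler stage 0 (a common default):

device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR); // linear blending between mip levels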

            blending allows us to blend pixels that
            we are currently rasterizing with pixels
            that have been previously rasterized to the same location.

            in other words, we blend primitives over previously drawn primitives.

            the idea of combining the pixel values that are currently being computed(source pixel)
            with pixel values previously written(destination pixel) is called blending.

you can enable blending by setting D3DRS_ALPHABLENDENABLE to true.

            you can set the source blend factor and destination blend factor by setting D3DRS_SRCBLEND and D3DRS_DESTBLEND.

            the default values for the source blend factor and destination blend factor are D3DBLEND_SRCALPHA and D3DBLEND_INVSRCALPHA.

            the alpha component is mainly used to specify the level of transparency of a pixel.

in order to make the alpha component describe the level of transparency of each pixel, we must use D3DBLEND_SRCALPHA as the source blend factor and D3DBLEND_INVSRCALPHA as the destination blend factor, as sketched below.
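the corresponding render states (a minimal sketch):

device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);    // scale the source by its alpha
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA); // scale the destination by 1 - alpha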

            we can obtain alpha info from a texture's alpha channel.

the alpha channel is an extra set of bits reserved for each texel that stores an alpha component.

when the texture is mapped to a primitive, the alpha components in the alpha channel are also mapped, and they become the alpha components for the pixels of the textured primitive.

            dds file is an image format specifically designed for DirectX applications and textures.

the stencil buffer is an off-screen buffer that we can use to achieve special effects.
the stencil buffer has the same resolution as the back buffer and depth buffer, so that the ij-th pixel in the stencil buffer corresponds with the ij-th pixel in the back buffer and depth buffer.

to use the stencil buffer, we can set it like this: Device->SetRenderState(D3DRS_STENCILENABLE, true/false).
we can clear the stencil buffer with Device->Clear(0, 0, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER | D3DCLEAR_STENCIL, 0xff000000, 1.0f, 0);
this means we want to clear the stencil buffer as well as the target (back buffer) and depth buffer.

            a stencil buffer can be created at the time that we create the depth buffer.
            when specifying the format of the depth buffer,we can specify the format of stencil buffer at the same time.
            in actuality, the stencil buffer and depth buffer share the same off-screen surface buffer.
            but a segment of memory in each pixel is designated to each particular buffer.

            we can use the stencil buffer to block rendering to certain areas of the back buffer.
            the decision to block a particular pixel from being written is decided by stencil test.
            the test is performed for every pixel.
            (ref & mask) ComparisonOperator (value & mask)
            ref: application-defined reference value.
            mask: application-defined mask value.
            value: the pixel in the stencil buffer that we want to test.
            if the test evaluates to be false, we block the pixel from being written to the back buffer.
            if a pixel isn't written to the back buffer, it isn't written to the depth buffer either.

            we can set the stencil reference value by Device->SetRenderState(D3DRS_STENCILREF,0x1122);
            we can set the stencil mask value by Device->SetRenderState(D3DRS_STENCILMASK,0x1215);
            the default is 0xffffffff, which doesn't mask any bits.

            we can not explicitly set the individual stencil values, but recall that we can clear the stencil buffer.
            in addition, we can use the stencil render state to control what's written to the stencil buffer.

the comparison operation can be any member of the D3DCMPFUNC enumerated type.

in addition to deciding whether to write or block a particular pixel from being written to the back buffer, we can specify how the stencil buffer itself should be updated, e.g.:
Device->SetRenderState(D3DRS_STENCILFAIL, StencilOperation).

we can also set a write mask that masks off bits of any value we write to the stencil buffer, via the state D3DRS_STENCILWRITEMASK. a combined sketch follows.
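pulling these states together, a sketch of the usual mirror setup that marks the mirror's pixels with 1 (values illustrative):

Device->SetRenderState(D3DRS_STENCILENABLE,    TRUE);
Device->SetRenderState(D3DRS_STENCILFUNC,      D3DCMP_ALWAYS);        // the test always passes
Device->SetRenderState(D3DRS_STENCILREF,       0x1);
Device->SetRenderState(D3DRS_STENCILMASK,      0xffffffff);           // don't mask any bits
Device->SetRenderState(D3DRS_STENCILWRITEMASK, 0xffffffff);
Device->SetRenderState(D3DRS_STENCILZFAIL,     D3DSTENCILOP_KEEP);
Device->SetRenderState(D3DRS_STENCILFAIL,      D3DSTENCILOP_KEEP);
Device->SetRenderState(D3DRS_STENCILPASS,      D3DSTENCILOP_REPLACE); // write the ref value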

            the stencil buffer allows us to block rendering to certain areas on the back buffer.

            we can use the stencil buffer to block the rendering of the reflected teapot if it is not being rendered into the mirror.

parallel light shadow.
r(t) = p + tL        (1)
n·p + d = 0          (2)
the set of intersection points found by shooting r(t) through each of the object's vertices with the plane defines the geometry of the shadow:
s = p + [(-d - n·p)/(n·L)]L
here L defines the direction of the parallel light rays.

point light shadow.
r(t) = p + t(p - L)  (1)
n·p + d = 0          (2)
the set of intersection points found by shooting r(t) through each of the object's vertices with the plane defines the geometry of the shadow.
here L defines the position of the point light.

the shadow matrix can be obtained from D3DXMatrixShadow.
using the stencil buffer, we can prevent writing overlapping pixels and therefore avoid double-blending artifacts.

ID3DXFont is used to draw text in a Direct3D application.

            we can create an ID3DXFont interface using the D3DXCreateFontIndirect function.
            also we can use D3DXCreateFont function to obtain a pointer to an ID3DXFont interface.

the ID3DXFont and CD3DFont samples for this chapter compute and display the frames rendered per second (fps).

CD3DFont can be a simple alternative for fonts, though it doesn't support complex formats and font types.
to use the CD3DFont class, we should include the d3dfont, d3dutil, and dxutil header/source files.

the CD3DFont class is constructed through its constructor, and we can then use its member functions, such as DrawText.

            D3DXCreateText can also create text.

the ID3DXBaseMesh interface contains a vertex buffer that stores the vertices of the mesh and an index buffer that defines how these vertices are put together to form the triangles of the mesh.
            GetVertexBuffer,GetIndexBuffer.
            also there are these related functions:LockVertexBuffer/LockIndexBuffer, UnlockVertexBuffer/UnlockIndexBuffer.

            GetFVF/GetNumVertices/GetNumBytesPerVertex/GetNumFaces.

            a mesh consists of one or more subsets.
            a subset is a group of triangles in the mesh that can all be rendered using the same attribute.
            by attribute we mean material, texture, and render states.
            each triangle in the mesh is given an attribute ID that specifies the subset in which the triangle lives.

the attribute IDs for the triangles are stored in the mesh's attribute buffer, which is a DWORD array.
since each face has an entry in the attribute buffer, the number of elements in the attribute buffer is equal to the number of faces in the mesh.
the entries in the attribute buffer and the triangles defined in the index buffer have a one-to-one correspondence:
entry i in the attribute buffer corresponds with triangle i in the index buffer.
we can access the attribute buffer with LockAttributeBuffer and UnlockAttributeBuffer.

            ID3DXMesh interface provides the DrawSubset(DWORD AttribId) method to
            draw the triangles of a particular subset specified by the AttribId argument.
when we want to optimize the mesh, we can use the OptimizeInplace function.

// get the adjacency info of the non-optimized mesh.
std::vector<DWORD> adjacencyInfo(Mesh->GetNumFaces() * 3);
Mesh->GenerateAdjacency(0.0f, &adjacencyInfo[0]);

// array to hold the optimized adjacency info.
std::vector<DWORD> optimizedAdjacencyInfo(Mesh->GetNumFaces() * 3);
Mesh->OptimizeInplace(
    D3DXMESHOPT_ATTRSORT |
    D3DXMESHOPT_COMPACT |
    D3DXMESHOPT_VERTEXCACHE,
    &adjacencyInfo[0],
    &optimizedAdjacencyInfo[0],
    0,
    0);

            a similar method is the Optimize method,
            which outputs an optimized version of the calling mesh object rather than actually optimizing the calling mesh object.

when a mesh is optimized with the D3DXMESHOPT_ATTRSORT flag, the geometry of the mesh is sorted by its attribute so that the geometry of a particular subset exists as a contiguous block in the vertex/index buffers.

            In addition to sorting the geometry,
            the D3DXMESHOPT_ATTRSORT optimization builds an attribute table.
            the attribute table is an array of D3DXATTRIBUTERANGE structures.

            Each entry in the attribute table corresponds to a subset of the mesh and
            specifies the block of memory in the vertex/index buffers,
            where the geometry for the subset resides.

            to access the attribute table of a mesh, we can use GetAttributeTable method.
            the method can return the number of attributes in the attribute table or
            it can fill an array of D3DXATTRIBUTERANGE structures with the attribute data.
to get the number of elements in the attribute table, we pass in 0 for the first argument:
DWORD numSubsets = 0;
Mesh->GetAttributeTable(0, &numSubsets);
once we know the number of elements, we can fill a D3DXATTRIBUTERANGE array with the actual attribute table by writing:
D3DXATTRIBUTERANGE* table = new D3DXATTRIBUTERANGE[numSubsets];
Mesh->GetAttributeTable(table, &numSubsets);
we can also set the attribute table directly with SetAttributeTable.

            the adjacency array is a DWORD array, where each entry contains an index identifying a triangle in the mesh.

GenerateAdjacency can also output the adjacency info:
std::vector<DWORD> adjacencyInfo(Mesh->GetNumFaces() * 3);
Mesh->GenerateAdjacency(0.001f, &adjacencyInfo[0]);

            sometimes we need to copy the data from one mesh to another.
            this is accomplished with the ID3DXBaseMesh::CloneMeshFVF method.
            this method allows the creation options and flexible vertex format of the destination mesh to be different from those of the source mesh.
            for example:
            ID3DXMesh* clone = 0;
            Mesh->CloneMeshFVF(
            Mesh->GetOptions(),
            D3DFVF_XYZ|D3DFVF_NORMAL,
            Device,
            &clone);

we can also create an empty mesh using the D3DXCreateMeshFVF function.
by empty mesh, we mean that we specify the number of faces and vertices that we want the mesh to be able to hold; D3DXCreateMeshFVF then allocates appropriately sized vertex, index, and attribute buffers. once we have the mesh's buffers allocated, we manually fill in the mesh's data contents:
we must write the vertices, indices, and attributes to the vertex buffer, index buffer, and attribute buffer, respectively.

            alternatively, you can create an empty mesh with the D3DXCreateMesh function.

            ID3DXBuffer interface is a generic data structure that D3DX uses to store data in a contiguous block of memory.
            GetBufferPointer: return a pointer to the start of the data.
            GetBufferSize: return the size of the buffer in bytes.

            load a x file: D3DXLoadMeshFromX.

D3DXComputeNormals generates the vertex normals for any mesh by using normal averaging.
            if adjacency information is provided, then duplicated vertices are disregarded.
            if adjacency info is not provided,
            then duplicated vertices have normals averaged from the faces that reference them.

            ID3DXPMesh allows us to simplify a mesh by applying a sequence of edge collapse transformations(ECT)
            each ECT removes one vertex and one or two faces.
            because each ECT is invertible(its inverse is called a vertex split),
            we can reverse the simplification process and restore the mesh to its exact original state.

otherwise we would end up spending time rendering a high-triangle-count model when a simpler low-triangle-count model would suffice.
we can create an ID3DXPMesh object using the D3DXGeneratePMesh function, sketched below.
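a sketch of the call, assuming adjacencyInfo was produced by GenerateAdjacency and the weight arguments take their defaults:

ID3DXPMesh* pmesh = 0;
D3DXGeneratePMesh(
    mesh,
    &adjacencyInfo[0],  // adjacency of the source mesh
    0, 0,               // default attribute and vertex weights
    1,                  // simplify as far as possible (at least one face)
    D3DXMESHSIMP_FACE,  // interpret the minimum as a face count
    &pmesh);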

            the attribute weights are used to determine the chance that a vertex is removed during simplification.
            the higher a vertex weight, the less chance it has of being removed during simplification.

            one way that we can use progressive meshes is to adjust the LOD(level of details) of a mesh based on its distance from the camera.

            the vertex weight structure allows us to specify a weight for each possible component of a vertex.

            bounding boxes/spheres are often used to speed up visibility tests and collision tests, among other things.

            a more efficient approach would be to compute the bounding box/sphere of each mesh and then do one ray/box or ray/sphere intersection test per object.
            we can then say that the object is hit if the ray intersected its bounding volume.

            since the right,up,and look vectors define the camera's orientation in the world, we sometimes refer to all three as the orientation vectors. the orientation vectors must be orthonormal.
            a set of vectors is orthonormal if they are mutually perpendicular to each other and of unit length.

            an orthogonal matrix has the property that its inverse equals its transpose.

            each time this function is called, we recompute the up and right vectors with respect to the look vector to ensure that they are mutually orthogonal to each other.

pitch: rotate the up and look vectors around the camera's right vector.
yaw: rotate the look and right vectors around the camera's up vector.
roll: rotate the up and right vectors around the camera's look vector.

            walking means moving in the direction that we are looking(along the look vector).
            strafing is moving side to side from the direction we are looking, which is of course moving along the right vector.
            flying is moving along the up vector.

the AIRCRAFT model allows us to move freely through space and gives us six degrees of freedom.
however, in some games, such as first-person shooters, people can't fly.

a heightmap is an array where each element specifies the height of a particular vertex in the terrain grid.
one possible graphical representation of a heightmap is a grayscale map, where darker values reflect portions of the terrain with low altitude and lighter values reflect portions of the terrain with higher altitude.

a particle is a very small object that is usually modeled as a point mathematically.
programmers used to display a particle with a billboard: a quad whose world matrix orients it so that it always faces the camera.

Direct3D 8.0 introduced a special point primitive called a point sprite that is most applicable to particle systems.
point sprites can have textures mapped to them and can change size. we can describe a point sprite by a single point; this saves memory and processing time because we only have to store and process one vertex instead of the four needed for a billboard (quad).
we can add one field to the particle vertex structure to specify the size of the particle, with the flag D3DFVF_PSIZE.

            the behavior of the point sprites is largely controlled through render states.

the formula below is used to calculate the final size of a point sprite based on its distance and these constants:
FinalSize = ViewportHeight * Size * sqrt(1 / (A + B*D + C*D*D))
FinalSize: the final size of the point sprite after the distance calculations.
ViewportHeight: the height of the viewport.
Size: corresponds to the value specified by the D3DRS_POINTSIZE render state.
A, B, C: correspond to the values specified by D3DRS_POINTSCALE_A, D3DRS_POINTSCALE_B, and D3DRS_POINTSCALE_C.
D: the distance of the point sprite in view space from the camera's position. since the camera is positioned at the origin in view space, D = sqrt(x*x + y*y + z*z), where (x, y, z) is the position of the point sprite in view space.
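a sketch of the render states that feed this formula; since SetRenderState takes DWORDs, the float values go through the usual reinterpret helper:

inline DWORD FtoDw(float f) { return *((DWORD*)&f); }

device->SetRenderState(D3DRS_POINTSPRITEENABLE, TRUE);    // texture the whole sprite
device->SetRenderState(D3DRS_POINTSCALEENABLE,  TRUE);    // size attenuates with distance
device->SetRenderState(D3DRS_POINTSIZE,    FtoDw(1.0f));  // the Size term above
device->SetRenderState(D3DRS_POINTSCALE_A, FtoDw(0.0f));  // the A, B, C constants
device->SetRenderState(D3DRS_POINTSCALE_B, FtoDw(0.0f));
device->SetRenderState(D3DRS_POINTSCALE_C, FtoDw(1.0f));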

            the attributes of a particle are specific to the particular kind of particle system that we are modeling.
            the particle system is responsible for updating, displaying, killing, and creating particles.
we use D3DUSAGE_POINTS to specify that the vertex buffer will hold point sprites when creating the vertex buffer for them.
we use D3DUSAGE_DYNAMIC when creating the vertex buffer because we need to update our particles every frame.

            therefore, once we compute the picking ray, we can iterate through each object in the scene and test if the ray intersects it. the object that the ray intersects is the object that was picked by the user.

when using the picking algorithm, we need to know the object that was picked and its location in 3D space.

screen to projection window transform:
the first task is to transform the screen point to the projection window.
the viewport transformation matrix is:

[ width/2        0             0            0 ]
[ 0              -height/2     0            0 ]
[ 0              0             MaxZ-MinZ    0 ]
[ X + width/2    Y + height/2  MinZ         1 ]

transforming a point p = (px, py, pz) on the projection window by the viewport transformation yields the screen point s = (sx, sy):
sx = px(width/2) + X + width/2
sy = -py(height/2) + Y + height/2
recall that the z-coordinate after the viewport transformation is not stored as part of the 2D image but is stored in the depth buffer.

assuming the X and Y members of the viewport are 0, letting P be the projection matrix, and noting that entries P00 and P11 of a projection matrix scale the x and y coordinates of a point, we get:
px = (2x/viewportWidth - 1)(1/P00)
py = (-2y/viewportHeight + 1)(1/P11)
pz = 1
computing the picking ray
recall that a ray can be represented by the parametric equation p(t) = p0 + tu, where p0 is the origin of the ray, describing its position, and u is a vector describing its direction.
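a sketch combining the two steps, in the style of the formulas above (device is assumed; the Ray struct is illustrative):

struct Ray { D3DXVECTOR3 origin, direction; };

Ray CalcPickingRay(IDirect3DDevice9* device, int x, int y)
{
    D3DVIEWPORT9 vp;
    device->GetViewport(&vp);
    D3DXMATRIX proj;
    device->GetTransform(D3DTS_PROJECTION, &proj);

    // screen point -> projection-window point (assumes vp.X = vp.Y = 0)
    float px = ( 2.0f * x / vp.Width  - 1.0f) / proj(0, 0);
    float py = (-2.0f * y / vp.Height + 1.0f) / proj(1, 1);

    Ray ray;
    ray.origin    = D3DXVECTOR3(0.0f, 0.0f, 0.0f); // rays start at the eye in view space
    ray.direction = D3DXVECTOR3(px, py, 1.0f);     // through the projection window
    return ray;
}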
transforming rays
in order to perform a ray-object intersection test, the ray and the objects must be in the same coordinate system. rather than transform all the objects into view space, it is often easier to transform the picking ray into world space or even an object's local space.
            D3DXVec3TransformCoord : transform points.
            D3DXVec3TransformNormal: transform vectors.
for each object in the scene, iterate through its triangle list and test whether the ray intersects one of the triangles; if it does, it must have hit the object that the triangle belongs to.
the picking ray may intersect multiple objects; however, the object closest to the camera is the object that was picked, since the closer object would have obscured the object behind it.
HLSL.
we write our shaders in Notepad and save them as regular ASCII text files; then we use the D3DXCompileShaderFromFile function to compile our shaders.
the special colon syntax denotes a semantic, which is used to specify the usage of the variable. this is similar to the flexible vertex format (FVF) of a vertex structure.

as with a C++ program, every HLSL program has an entry point.
posted @ 2008-10-30 22:56 jolley

bug01: the width/height used when creating the window did not match the back buffer width/height used when initializing D3D, which made CreateDevice return D3DERR_INVALIDCALL.
bug02:
static LRESULT CALLBACK WindowProc(HWND window, UINT msg, WPARAM wParam, LPARAM lParam);  // callback function
wnd.lpfnWndProc = WindowProc;
the reason static is used here: a non-static member function carries an implicit this pointer and cannot be converted to WNDPROC; without it you get
error C3867: 'WinWrapper::WindowProc': function call missing argument list; use '&WinWrapper::WindowProc' to create a pointer to member
e:\dx beginner\d3dinit\d3dinit\winwrapper.cpp(46) : error C2440: '=' : cannot convert from 'LRESULT (__stdcall WinWrapper::* )(HWND,UINT,WPARAM,LPARAM)' to 'WNDPROC'
bug03: D3DCOLOR_XRGB(255.0f, 0.0f, 0.0f) produces this error:
error C2296: '&' : illegal, left operand has type 'float'; the macro's bitwise operations require integer operands, so converting the floats to integers compiles fine.
bug04:

Direct3D9: (ERROR) :Current vertex shader declaration doesn't match VB's FVF
this is caused by creating and drawing with different FVFs. in my project I used two vertex buffers: all the operations before rendering were based on buffer A, but rendering used buffer B, and the two vertex buffers used different FVFs.

bug05:
many times the primitives were drawn successfully but nothing showed up on screen; after a long chase it turned out to be the camera position.
 // set up the camera position and related parameters
 D3DXMATRIX matCamera;
 D3DXVECTOR3 eye(-10.0f,3.0f,-15.0f);     // camera position (eye)
 D3DXVECTOR3 lookAt(0.0f,0.0f,0.0f);      // point the camera looks at (look at)
 D3DXVECTOR3 up(0.0f,1.0f,0.0f);          // the camera's up vector
 D3DXMatrixLookAtLH(&matCamera,
  &eye,
  &lookAt,
  &up);
 pD3DDevice->SetTransform(D3DTS_VIEW,&matCamera);
this function matters a lot; often everything works after adjusting eye.
the same problem exists with light placement: if the light's direction and position are not set properly, you only see the back of the object; after readjusting them the object appears as expected.

DirectInput.
DirectInput uses hooks internally, and hooks act directly on Windows messages, which brings unnecessary trouble; the Win32 API or plain Windows messages do not, so using Windows messages directly may be better. DirectInput also has a number of problems:
1) it creates an extra thread just to read raw keyboard data (which you could read yourself with Win32).
2) it does not honor the keyboard repeat rate the user set in the control panel.
3) it does not handle uppercase and shifted characters; you must check whether Caps Lock is on as well as the regular keys.
4) it does not support non-English keyboard mappings.
5) it does not support input method editors (e.g., for Chinese).
6) it does not support accessibility keyboards and other devices that need special drivers, such as voice control.

developers abroad have largely moved away from DirectInput to Windows messages or Win32 APIs such as GetKeyBoardState, GetMouseState, and the like.

this is the approach to take when choosing between DirectInput and Windows messages.

ran into an error today: it crashed while freeing memory, with the message
DAMAGE: after normal block(#78493) at 0x015EBADB.
it turned out the allocated block was too small, so the code wrote past its end; enlarging the allocation fixed it.

            tweening:Short for in-betweening, the process of generating intermediate frames between two images to give the appearance that the first image evolves smoothly into the second image. Tweening is a key process in all types of animation, including computer animation. Sophisticated animation software enables you to identify specific objects in an image and define how they should move and change during the tweening process.

posted @ 2008-09-19 01:15 jolley

index buffer: build an index ordering and encode all of the drawn geometry according to it. this has two benefits:
1) indexing reduces the number of vertices to render, which improves performance.
2) the indices enable caching: vertices recently transformed and lit can be cached and reused the next time they are needed. without indices the GPU cannot tell that the 'bc' in abc and bcd are the same vertices. the general approach is to minimize the amount of data, by choosing the primitive type and the vertex-structure layout (SetFVF, SetVertexDeclaration).
primitive layouts for two triangles sharing an edge:
1) as a triangle list:
abcbcd
2) as a triangle strip:
abcd
3) as an index buffer + triangle list:
abcd
012, 123
4) as an index buffer + triangle strip:
abcd
0123

the vertex structure and the index structure are determined by the way the primitives are drawn; for example, the vertex setup for a triangle strip differs from that for a triangle list. index buffers are numbered following the clockwise winding order (in DX).

when rendering a model's vertices, if the vertex layout or the way the vertices should be rendered is unclear, you can get that information from the artists. earlier, while working on an airplane, I never got the model built but spent a lot of energy on it; only later did I realize the effort should go into the actual model import instead. the artists can provide the detailed vertex information, such as the exact vertex structure and the layout of the vertices in the imported model, which is the key to the problem: the vertex layout is fully visible on the art side, and the program only needs to know the vertex structure. also, it helps to study vertices inside a model rather than designing them in isolation; that avoids a lot of trouble.

the camera coordinate system is defined over the camera's visible screen region: in the camera's coordinate system, the x-axis points right, the z-axis points forward (into the screen, along the camera direction), and the y-axis points up (not the world's up, but the camera's up).

to simplify the transform between world space and object space, a new coordinate system called the inertial coordinate system is introduced, a halfway point between the two. the inertial coordinate system's origin coincides with the object coordinate system's origin, but its axes are parallel to the world coordinate system's axes. the inertial coordinate system acts as an intermediary between object space and world space: a rotation transforms object space into inertial space, and a translation transforms inertial space into world space.
the steps to transform object space into world space:
1) rotate the object axes (say, clockwise 45 degrees) into the inertial coordinate system.
2) translate the inertial coordinate system (down and to the right) into the world coordinate system.
3) together, the object axes are rotated clockwise 45 degrees and then translated down and right into the world coordinate system.

nested coordinate systems define a hierarchical, tree-like structure of coordinate spaces; the world coordinate system is the root of the tree.

for many vectors we only care about the direction, not the magnitude; in such cases unit vectors are important (D3DXVec3Normalize).
generally speaking, the dot product describes how similar two vectors are: it equals the product of the vectors' magnitudes and the cosine of the angle between them.
its geometric meaning is the length of a times the length of b's projection onto a (or the length of b times the length of a's projection onto b). it is a scalar and can be positive or negative; the dot product of mutually perpendicular vectors is 0.

the cross product produces a vector perpendicular to the plane spanned by two vectors (D3DXVec3Cross).
its most important application is creating vectors perpendicular to planes, triangles, and polygons.

define p, q, r as the unit vectors along the x, y, z axes; then any vector can be written as v = xp + yq + zr. p, q, r are called basis vectors, and this particular basis is the Cartesian coordinate system.
a coordinate system can be represented by any three basis vectors, provided they are linearly independent.

each row of a matrix can be interpreted as a transformed basis vector.

you can visualize a matrix by imagining the transformed coordinate system's basis vectors: they form an 'L' shape in 2D and a tripod shape in 3D.

// regenerate the basis vectors
D3DXVec3Normalize(&vLook, &vLook);     // normalize to get the look direction
D3DXVec3Cross(&vRight, &vUp, &vLook);  // normal perpendicular to the up/look plane
D3DXVec3Normalize(&vRight, &vRight);   // normalize to get the right direction
D3DXVec3Cross(&vUp, &vLook, &vRight);  // normal perpendicular to the look/right plane
D3DXVec3Normalize(&vUp, &vUp);         // normalize to get the up direction

// Matrices for pitch, yaw and roll
// build a rotation matrix from a normalized axis vector and a scalar angle.
D3DXMATRIX matPitch, matYaw, matRoll;
D3DXMatrixRotationAxis(&matPitch, &vRight, fPitch);
D3DXMatrixRotationAxis(&matYaw, &vUp, fYaw);
D3DXMatrixRotationAxis(&matRoll, &vLook, fRoll);

// rotate the LOOK & RIGHT Vectors about the UP Vector
// transform a 3D vector by a matrix.
D3DXVec3TransformCoord(&vLook, &vLook, &matYaw);
D3DXVec3TransformCoord(&vRight, &vRight, &matYaw);

// rotate the LOOK & UP Vectors about the RIGHT Vector
D3DXVec3TransformCoord(&vLook, &vLook, &matPitch);
D3DXVec3TransformCoord(&vUp, &vUp, &matPitch);

// rotate the RIGHT & UP Vectors about the LOOK Vector
D3DXVec3TransformCoord(&vRight, &vRight, &matRoll);
D3DXVec3TransformCoord(&vUp, &vUp, &matRoll);

D3DXVECTOR3 *WINAPI D3DXVec3TransformCoord(
    D3DXVECTOR3 *pOut,
    CONST D3DXVECTOR3 *pV,
    CONST D3DXMATRIX *pM
);
The principle is pOut' = pV' * pM. Since pM is a 4x4 matrix, pV is extended to pV' = [pV 1]; the resulting 4D vector is then divided by its w component and w is dropped, giving pOut = [pOut'.x/w  pOut'.y/w  pOut'.z/w]. A by-hand version is sketched below.
D3DXVec3TransformNormal works much the same way, except that w is set to 0, so the matrix's translation part is ignored.
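A sketch of what that amounts to element by element, using D3DX's row-vector convention (the function name is illustrative):

    D3DXVECTOR3 TransformCoordManual(const D3DXVECTOR3& v, const D3DXMATRIX& m)
    {
        // treat v as the row vector [x y z 1] and multiply it by the matrix
        float x = v.x*m._11 + v.y*m._21 + v.z*m._31 + m._41;
        float y = v.x*m._12 + v.y*m._22 + v.z*m._32 + m._42;
        float z = v.x*m._13 + v.y*m._23 + v.z*m._33 + m._43;
        float w = v.x*m._14 + v.y*m._24 + v.z*m._34 + m._44;
        return D3DXVECTOR3(x / w, y / w, z / w);  // divide by w, then drop it
        // D3DXVec3TransformNormal: the same, but with w = 0 and no divide
    }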

// Set the camera matrix: position and orientation.
static D3DXVECTOR3 vCameraLook = D3DXVECTOR3(0.0f, 0.0f, 1.0f);
static D3DXVECTOR3 vCameraUp   = D3DXVECTOR3(0.0f, 1.0f, 0.0f);
static D3DXVECTOR3 vCameraPos  = D3DXVECTOR3(0.0f, 0.0f, -5.0f);
D3DXMATRIX view;

D3DXMatrixLookAtLH(&view, &vCameraPos,   // pEye = Position
                   &vCameraLook,         // pAt
                   &vCameraUp);          // pUp
m_pd3dDevice->SetTransform(D3DTS_VIEW, &view);
POSITION: the object's position
LOOK: the direction the object is facing
RIGHT: the direction of the object's right side
UP: needed only when the object can rotate about its LOOK vector; it says which way is "up" or "down" for the object.
pitch - about RIGHT
roll - about LOOK
yaw - about UP
Moving along LOOK - changes POSITION.

m_pd3dDevice->SetTransform(D3DTS_WORLD, &m_pObjects[0].matLocal);

// Set the texture content to render with.
m_pd3dDevice->SetTexture(0, m_pTexture);
m_pd3dDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
m_pd3dDevice->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);

// Passing an FVF to IDirect3DDevice9::SetFVF specifies a legacy FVF with stream 0.
// Set the vertex format.
m_pd3dDevice->SetFVF(FVF);
// Bind the vertex buffer to a device data stream.
m_pd3dDevice->SetStreamSource(0, m_pVB, 0, sizeof(VERTEX));
// Set the index data.
m_pd3dDevice->SetIndices(m_pIB);
// Draw.
m_pd3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                                   0,
                                   0,
                                   16,  // number of vertices
                                   0,
                                   10); // number of primitives

Before rotating the basis vectors they must be re-normalized, because the vectors have to remain mutually perpendicular and unit length.
D3DXQuaternionRotationYawPitchRoll: builds a quaternion from given yaw, pitch, roll.
D3DXMatrixRotationQuaternion: builds a rotation matrix from a quaternion.

With pitch about x (RIGHT), yaw about y (UP) and roll about z (LOOK), as listed above, the per-axis quaternions are:
Qx = [cos(pitch/2) (sin(pitch/2), 0, 0)]
Qy = [cos(yaw/2) (0, sin(yaw/2), 0)]
Qz = [cos(roll/2) (0, 0, sin(roll/2))]

D3DXMatrixLookAtLH is very helpful for building a follow camera.

Performing the rotation with quaternions:
 fRoll = fPitch = fYaw = 0.0f;
 D3DXVECTOR3 vPos(0.0f, 0.0f, 0.0f);
 static D3DXMATRIX matView = D3DXMATRIX(1.0f, 0.0f, 0.0f, 0.0f,
                                        0.0f, 1.0f, 0.0f, 0.0f,
                                        0.0f, 0.0f, 1.0f, 0.0f,
                                        0.0f, 0.0f,-5.0f, 1.0f);
 // update the position and view matrix
 D3DXQUATERNION qR;
 D3DXMATRIX matR, matTemp;
 // build a quaternion from yaw/pitch/roll
 D3DXQuaternionRotationYawPitchRoll(&qR, fYaw, fPitch, fRoll);
 // build a rotation matrix from the quaternion
 D3DXMatrixRotationQuaternion(&matR, &qR);
 // apply the rotation matrix
 D3DXMatrixMultiply(&matView, &matR, &matView);
 // translation matrix
 D3DXMatrixTranslation(&matTemp, vPos.x, vPos.y, vPos.z);
 // apply the translation matrix
 D3DXMatrixMultiply(&matView, &matTemp, &matView);
 // matView was built as the camera's world transform; its inverse is the view transform
 D3DXMatrixInverse(&matTemp, NULL, &matView);

 m_pd3dDevice->SetTransform(D3DTS_VIEW, &matTemp);

In a windowed application the viewport is the size of the window's client area; in a full-screen application it is the screen resolution.
Viewport usage: GetViewport retrieves the current viewport data; fill a viewport structure with the desired size and with the MinZ/MaxZ range handed to the depth buffer, and call SetViewport before the DrawPrimitive* calls. When drawing is finished, restore the original viewport, so the whole render target can be cleared in one pass and text can be drawn through the font class the Direct3D framework provides.

The most efficient way to render a scene is to render only the pixels the viewer can see; rendering pixels that cannot be seen is redundant work called overdraw.
The depth buffer stores depth information for every pixel on the display. Before showing the virtual world, every pixel in the depth buffer should be cleared to the farthest possible depth value. During rasterization, the depth-buffer algorithm obtains the depth of every pixel the current polygon covers; if a pixel is nearer the camera than the value previously stored for it, the nearer pixel is displayed and its depth overwrites the old depth-buffer entry. This runs for every pixel of every polygon drawn, as the sketch below spells out.
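A minimal sketch of that per-pixel test (what the hardware does during rasterization; the buffer layout here is illustrative):

    void WritePixel(int x, int y, float z, DWORD color,
                    float* depthBuffer, DWORD* colorBuffer, int width)
    {
        int i = y * width + x;
        if (z < depthBuffer[i])       // nearer than what is already stored?
        {
            depthBuffer[i] = z;       // keep the new, nearer depth
            colorBuffer[i] = color;   // and display this pixel
        }
    }
    // Before each frame, every depthBuffer entry is cleared to the farthest
    // possible value (e.g. 1.0f), as described above.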

The color buffer stores what will later be drawn to the screen. Each depth-buffer pixel is usually 16 or 24 bits; depth precision depends on the bit count.
W-buffer: reduces the problems a Z-buffer has with distant objects. Enable it with
m_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_USEW);
and test for support with
if (d3dCaps.RasterCaps & D3DPRASTERCAPS_WBUFFER). Both are combined in the sketch below.
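Putting those two lines together, assuming a D3D9 device (the DX8 variant uses D3DCAPS8, as elsewhere in these notes):

    D3DCAPS9 d3dCaps;
    m_pd3dDevice->GetDeviceCaps(&d3dCaps);
    if (d3dCaps.RasterCaps & D3DPRASTERCAPS_WBUFFER)
        m_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_USEW);   // W-buffer
    else
        m_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);   // fall back to Z-buffer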

How do you rotate the camera with quaternions?
Build a quaternion from yaw, pitch and roll, convert it into a matrix, then take that matrix's inverse.

Only square matrices can be inverted, so when we speak of inverting a matrix, a square matrix is implied;
and not every square matrix has an inverse.

A plane can be represented by a normal vector n and a constant d. To classify a point p against the plane:

    If n·p + d = 0, the point p lies on the plane.

    If n·p + d > 0, p is in front of the plane, in its positive half-space.

    If n·p + d < 0, p is behind the plane, in its negative half-space.

Ways to create a plane (a classification sketch follows):
1) From a point and a normal: D3DXPlaneFromPointNormal.
2) From three points p0, p1, p2 on the plane: D3DXPlaneFromPoints.
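A sketch of the three-point route plus the classification test; D3DXPlaneDotCoord computes exactly the n·p + d value above (the example points are illustrative):

    D3DXPLANE plane;
    D3DXVECTOR3 p0(0,0,0), p1(1,0,0), p2(0,0,1);
    D3DXPlaneFromPoints(&plane, &p0, &p1, &p2);      // plane through three points

    D3DXVECTOR3 point(0.0f, 5.0f, 0.0f);
    float side = D3DXPlaneDotCoord(&plane, &point);  // n.point + d
    if      (side > 0.0f) { /* in front of the plane */ }
    else if (side < 0.0f) { /* behind the plane */ }
    else                  { /* on the plane */ }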


            http://www.shnenglu.com/shadow/articles/2807.html
            http://www.shnenglu.com/lovedday/archive/2008/04/04/46264.html

posted @ 2008-09-15 08:32 jolley

A const member function can only perform const operations on its members.
struct StringLess :
    public std::binary_function<const std::string&,
                                const std::string&,
                                bool>
{
    bool operator()(const std::string& a, const std::string& b) const
    {
        // strcmp returns <0/0/>0; a strict weak ordering needs "< 0" --
        // the raw int converts to true for any non-zero value.
        return strcmp(a.c_str(), b.c_str()) < 0;
    }
};

std::map<std::string, Core::Rtti*, StringLess> nameTable;
Core::Rtti* Factory::GetRttiName(std::string className) const
{
    return this->nameTable[className];
}
But this produced an error:
g:\framework\foundation\foundation\core\factory.cpp(60) : error C2678: binary '[' : no operator found which takes a left-hand operand of type 'const std::map<_Kty,_Ty,_Pr>' (or there is no acceptable conversion)
        with
        [
            _Kty=std::string,
            _Ty=Core::Rtti *,
            _Pr=StringLess
        ]
        e:\microsoft visual studio 8\vc\include\map(166): could be 'Core::Rtti *&std::map<_Kty,_Ty,_Pr>::operator [](const std::basic_string<_Elem,_Traits,_Ax> &)'
        with
        [
            _Kty=std::string,
            _Ty=Core::Rtti *,
            _Pr=StringLess,
            _Elem=char,
            _Traits=std::char_traits<char>,
            _Ax=std::allocator<char>
        ]
        while trying to match the argument list '(const std::map<_Kty,_Ty,_Pr>, std::string)'
        with
        [
            _Kty=std::string,
            _Ty=Core::Rtti *,
            _Pr=StringLess
        ]
The root cause is misuse of const: the function was marked const without a clear idea of what a const function is allowed to do.
operator[] cannot be called on a const map, because operator[] is not a const member (it inserts the key when it is missing).
So the quick fix is simply to drop the const; a better fix that keeps it is sketched below.
To summarize some const rules (I have misused them before):
A const return value means the returned content is read-only and must not be modified.
A const parameter means the argument is read-only: it takes part in the computation but is never itself modified.
A const member function may only perform const operations on the object and nothing unrelated to const-ness; the code above is a good example of getting this wrong.
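A const-correct sketch of the same lookup: find() is a const member where operator[] is not, so the function can keep its const qualifier (same types as above):

    Core::Rtti* Factory::GetRttiName(const std::string& className) const
    {
        std::map<std::string, Core::Rtti*, StringLess>::const_iterator it =
            nameTable.find(className);
        return (it != nameTable.end()) ? it->second : NULL;
    }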
posted @ 2008-09-03 10:20 jolley

doxygen is a tool that helps you review and improve a project's structure; it can produce class-relationship diagrams and function call graphs for the whole project.
You need the following pieces:
1) doxygen; 2) Graphviz, the graph-visualization software; 3) iconv, for Chinese encoding conversion.
Once these are installed, open the doxygen front end and choose Expert to configure it. The relevant settings:
1) Project: project name, version, and output directory; this determines the title shown on the first page of the CHM.
2) Build: what to extract, e.g. EXTRACT_ALL shows every element of the program (classes, functions, variables); EXTRACT_PRIVATE shows private members too.
3) Messages: WARN_LOGFILE names a file that collects errors and build warnings; all later build messages can be found in that log.
4) Input: the source directories of the project you want documented.
5) Source Browser: whether the generated documentation can browse the source code.
6) HTML: to produce a CHM you must enable GENERATE_HTMLHELP.
7) Dot: enable CLASS_DIAGRAMS, UML_LOOK, CALL_GRAPH, CALLER_GRAPH to get class diagrams, UML-style diagrams, call graphs, and caller graphs.

After making these choices doxygen writes out a Doxyfile; the Doxyfile holds all of doxygen's configuration and can be edited directly, along the lines of the fragment below.
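A Doxyfile fragment matching the settings above (the project name and paths are illustrative):

    # Project
    PROJECT_NAME      = MyEngine
    OUTPUT_DIRECTORY  = docs
    # Build
    EXTRACT_ALL       = YES
    EXTRACT_PRIVATE   = YES
    # Messages
    WARN_LOGFILE      = doxygen_warnings.log
    # Input
    INPUT             = src include
    # Source browser
    SOURCE_BROWSER    = YES
    # HTML (needed for the CHM build)
    GENERATE_HTMLHELP = YES
    # Dot (requires Graphviz)
    HAVE_DOT          = YES
    CLASS_DIAGRAMS    = YES
    UML_LOOK          = YES
    CALL_GRAPH        = YES
    CALLER_GRAPH      = YES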

Once configured, it generates the HTML files, PNG images, and so on.

These are then packed into a CHM file for convenient reading.
Other packaging software can help here, because HTML Help's capacity is limited and the HTML doxygen generates is sometimes itself faulty: with too many cross-references things go wrong, e.g. the HTML fails to generate properly. Once all my generated HTML files came out at 0 KB, which was maddening; and the CHM compiler is just as problematic when given too much input. So in practice, use doxygen plus some packaging software (not necessarily CHM).
Some other useful information is available here:
            http://www.fmddlmyy.cn/text21.html
posted @ 2008-08-17 16:24 jolley

Static libs in interdependent projects often cause trouble when a lib is stale. A problem I hit earlier: project A modified a file in project B, but B was not rebuilt, so the copy of B's lib used by A was old. The symptom: while debugging A, the single-stepper clearly ought to land on a certain statement but execution lands before or after it instead, because that statement was never updated inside the lib.
Advantages of DLLs:
1) easier division of work; 2) easier later maintenance and extension; 3) fewer built variants, generally just debug and release DLLs; 4) good code encapsulation. By contrast, a static lib must be rebuilt every time, while a DLL can be updated as a binary: once the interface is fixed, updating the implementation updates the DLL's version without rebuilding the clients.
Engines in particular tend to use DLLs heavily, one per module, which eases development and later maintenance. The awkward part of a DLL is that nothing inside it can be called unless you export it, so when building the DLL you must declare the exported function interface for clients to use, along the lines of the sketch below.
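A sketch of the usual export pattern (ENGINE_EXPORTS, ENGINE_API and RenderFrame are illustrative names):

    // The DLL project defines ENGINE_EXPORTS; client projects do not.
    #ifdef ENGINE_EXPORTS
    #   define ENGINE_API __declspec(dllexport)
    #else
    #   define ENGINE_API __declspec(dllimport)
    #endif

    ENGINE_API void RenderFrame();   // part of the DLL's exported interface

    // Anything not marked ENGINE_API (or listed in a .def file) stays internal;
    // clients cannot call it, which is the restriction described above.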
posted @ 2008-07-30 20:45 jolley

Divide tasks finely enough that no two team members' work overlaps.
Win people over sincerely: communicate and talk with them often, and put yourself in their position.
Track progress closely; when the schedule slips, make a judgment call promptly.
Set priorities: what to do first and what later, what can be finished quickly and what needs time, what can be done now, what can wait, and what cannot be done now but can be done later; all of this must be perfectly clear.
Before assigning a task, state the problem clearly: the goal and requirements must be explicit, and follow-up should match each person's ability so that schedule and quality hold.
Require that team members not repeat the same mistake.

Don't think about problems along a single track. Attack a problem D from angles A, B, and C; that tends to be more effective. My rule now is to examine a problem from at least three angles before settling it; otherwise it is easy to overlook something. I hope not to repeat this mistake: I used to think from multiple angles and have slipped back into the old habit, which shows the skill isn't yet solid and needs study and reinforcement.

When handing out a task, make the steps clear: learn to list them as step 1, step 2, step 3 in separate blocks, rather than writing something too general.

Keep a grip on the whole and consider the full picture.

Learn to be flexible and to maneuver; trust that a problem always has a solution, not necessarily the one you fixated on. Aim for the same destination by different routes, and don't let your thinking stay locked in one corner.
Everything has a deadline; once it passes, learn to re-examine the problem from a different angle.

To be continued.
posted @ 2008-07-14 22:14 jolley
