
Note: the code and images in this article are taken from the NVIDIA CUDA Programming Guide; the original author retains all copyrights.

NVIDIA has finally released CUDA, which may ship as part of the next-generation SDK 10 as a gift to professionals eager to tap the GPU's computing power. Interested readers can go take a look and download it for an early taste; a large number of samples are provided.
We all know the GPU's parallel computing performance is formidable; letting such abundant compute resources sit idle, used only to run games, is far from enough. The traditional graphics APIs, however, expose only graphics operations and offer no general-purpose computing interface like a CPU's, so the old approaches were cumbersome and demanded considerable experience. For example, when using an HDR image as a cube map, if you use the latitude-longitude style textures Paul Debevec provides, you need a large amount of inverse trigonometry; doing inverse trig in a vertex shader wastes too much time, so people used a 1D texture as a linearly interpolated lookup table for fast evaluation (accessing a data texture). That example can be seen as GPU computing at its most basic.


The Birth of CUDA
Doing general computation through the traditional graphics APIs was always a dead end, and CUDA's arrival will change that. CUDA extends things mainly on the driver side and the library side: the CUDA libraries provide standard FFT and BLAS implementations, together with a C compiler designed for NVIDIA GPUs. CUDA's features, paraphrased from NVIDIA's official description, are as follows:
1. A unified hardware and software architecture designed for parallel computing, presumably first realized on the G80 series.
2. Data caching and multithread management implemented inside the GPU. Impressive; the idea somewhat resembles CPU programming on the Xbox 360 and PS3.
3. GPU programs can be written in standard C.
4. Standard discrete FFT and BLAS basic linear algebra libraries.
5. A dedicated CUDA compute driver.
6. Accelerated data upload from CPU to GPU. This is exactly where the bottleneck sits.
7. The CUDA driver interoperates with the OpenGL and DirectX drivers. Also strong; presumably it can even drive the rendering pipeline directly.
8. Multi-GPU parallel computing in combination with SLI.
9. Support for both Linux and Windows. This one is mostly a selling point.
Having seen the pitch, take a look at the Programming Guide and the other documents CUDA ships with. NVIDIA strikes me as the Microsoft of the graphics world: it supplies well-made gear such as SDKs and developer documentation, far better than ATi does.

The Essence of CUDA
In essence, CUDA is a compiler that NVIDIA wrote for its own GPUs, NVCC, plus its companion libraries. CUDA source files can use the .cu extension instead of .cpp. NVCC is a hybrid of preprocessor and compiler: when it encounters CUDA code it compiles it into code the GPU executes, that is, code that calls into the CUDA driver; when it encounters host C++ code it hands it to the platform's own C++ compiler, for example Microsoft's C++ compiler under Visual Studio. The linker is then invoked to combine the compiled modules and link them with the CUDA libraries and the standard C/C++ libraries into the final CUDA application. NVCC thus mimics the way a general compiler driver such as GCC works (compiling C/C++ with GCC essentially dispatches to cc and g++). NVCC has a complex set of options; see the NVCC documentation in the CUDA SDK for details.
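To make that split concrete, here is a minimal sketch of a .cu file (my own illustration, not from the SDK): NVCC compiles the __global__ kernel for the GPU and hands the host code in main to the platform's C++ compiler, then links everything against the CUDA runtime.

// minimal.cu -- a hypothetical minimal CUDA program (illustration only).
// Assumed build line: nvcc minimal.cu -o minimal
#include <stdio.h>

__global__ void scale(float *v, float s)   // compiled by NVCC for the GPU
{
    v[threadIdx.x] *= s;
}

int main(void)                             // compiled by the host C++ compiler
{
    float h[4] = {1, 2, 3, 4};
    float *d;
    cudaMalloc((void**)&d, sizeof(h));
    cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);
    scale<<<1, 4>>>(d, 2.0f);              // launch one block of four threads
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    cudaFree(d);
    printf("%f %f %f %f\n", h[0], h[1], h[2], h[3]);
    return 0;
}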

CUDA Programming Concepts
Device
The CUDA API provides interfaces to enumerate the hardware in the system that can serve as compute devices and to initialize it for computation, much like initializing the COM interfaces in DirectX programming.
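As a hedged sketch of that enumeration with the CUDA runtime API (the calls are standard runtime functions; the tiny program around them is mine):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);              /* how many CUDA-capable devices? */
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);   /* name, compute capability, ... */
        printf("device %d: %s (compute %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }
    cudaSetDevice(0);                        /* bind device 0 to this host thread */
    return 0;
}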

Texture
Both data in linear memory and data in CUDA arrays can be used as textures, although arrays are better suited to cache-level optimization. The texture concept resembles a traditional image texture: texels can be stored as 8-, 16-, or 32-bit integers, or in fp16 format. Binding an array as a texture also brings some advantages: integer and fp16 values can optionally be promoted uniformly to 32-bit floats, array boundary handling becomes available, and filtering can be applied.
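A sketch of the texture path using the texture reference API of that era (long since deprecated in modern CUDA; the kernel and helper below are my own illustration, not SDK code):

#include <cuda_runtime.h>

/* 1D float texture; reads return the element unchanged */
texture<float, 1, cudaReadModeElementType> tex;

__global__ void copyThroughTexture(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch(tex, i);   /* cached fetch through the texture unit */
}

/* Linear device memory can be bound directly; a CUDA array would instead use
   cudaBindTextureToArray and gain the filtering and boundary modes. */
void bindLinear(float *d_data, size_t bytes)
{
    cudaBindTexture(0, tex, d_data, bytes);
}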

OpenGL/DirectX Interoperability
OpenGL buffer objects and Direct3D 9 vertex buffers can be mapped into the address space CUDA operates on, letting CUDA read and write the data inside them. A CUDA context can interoperate with only one Direct3D device at a time, however. CUDA does not yet support the same for DX10, nothing other than D3D9 vertex buffers may be mapped, and a buffer can be mapped only once at a time. (NVIDIA is not very clear on this point; my guess is that there is a single mapping slot.)
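A hedged sketch of the OpenGL side using the interop entry points of that generation (cudaGLRegisterBufferObject and friends are real but deprecated today; the surrounding function is my own):

#include <GL/gl.h>
#include <cuda_gl_interop.h>

void computeIntoBuffer(GLuint vbo)
{
    float *d_ptr = 0;

    cudaGLRegisterBufferObject(vbo);             /* make the GL buffer known to CUDA */
    cudaGLMapBufferObject((void**)&d_ptr, vbo);  /* map it into CUDA's address space */

    /* ... launch a kernel that reads or writes d_ptr here ... */

    cudaGLUnmapBufferObject(vbo);                /* hand the buffer back to OpenGL */
    cudaGLUnregisterBufferObject(vbo);
}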

Thread Block
A thread block is a batch of threads that can cooperate together by efficiently sharing data through some fast shared memory and synchronizing their execution to coordinate memory accesses.
A thread block, then, consists of threads that can share memory quickly and synchronize their memory accesses. Each thread has an ID, which behaves like a coordinate in a plane; the blocks in turn make up a grid. The diagram:

[Figure: CudaThread.PNG, the thread / block / grid hierarchy]
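A small sketch of how a thread recovers its 2D ID inside a kernel, matching the figure (the kernel and launch sizes are my own example):

__global__ void addOne(float *data, int width, int height)
{
    /* 2D thread ID within the block, plus the block's 2D ID within the grid */
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        data[y * width + x] += 1.0f;
}

/* Hypothetical launch: 16x16 threads per block, enough blocks to cover the data.
   dim3 block(16, 16);
   dim3 grid((width + 15) / 16, (height + 15) / 16);
   addOne<<<grid, block>>>(d_data, width, height);                              */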


Memory Model
A thread that executes on the device has only access to the device's DRAM and on-chip memory through the following memory spaces:
• Read-write per-thread registers,
• Read-write per-thread local memory,
• Read-write per-block shared memory,
• Read-write per-grid global memory,
• Read-only per-grid constant memory,
• Read-only per-grid texture memory.
The global, constant, and texture memory spaces can be read from or written to by the host and are persistent across kernel calls by the same application.
The memory spaces we deal with in CUDA are therefore registers, local memory, shared memory, global memory, constant memory, and texture memory, a bit like C's storage classes. Memory can be allocated either as arrays or as ordinary linear memory, and CUDA provides APIs to copy data between them correctly.

[Figure: cudamemory.PNG, the CUDA memory model]


We will discuss GPU memory optimization later. From the material above we can see that the grid concept here resembles a process, i.e. the unit that resources are allocated to for thread execution, while only threads do the actual computing. Local memory is like a thread's stack; texture memory is like the heap.
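A sketch of how those spaces appear in source code (the qualifiers are standard CUDA; the reduction kernel itself is my own illustration):

__constant__ float c_scale;                /* read-only per-grid constant memory */

__global__ void blockSum(const float *g_in, float *g_out)  /* g_* are global memory */
{
    __shared__ float s_buf[256];           /* read-write per-block shared memory
                                              (assumes blockDim.x <= 256) */

    int tid = threadIdx.x;                 /* tid itself lives in a register */
    s_buf[tid] = g_in[blockIdx.x * blockDim.x + tid] * c_scale;
    __syncthreads();                       /* coordinate the block's accesses */

    if (tid == 0) {                        /* thread 0 sums what the block loaded */
        float sum = 0.0f;
        for (int i = 0; i < blockDim.x; ++i)
            sum += s_buf[i];
        g_out[blockIdx.x] = sum;
    }
}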

A Concrete Example
I will use the simpleCUBLAS sample shipped with CUDA as the example.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>   /* for sqrt() and fabs(), used by the result check below */

/* Includes, cuda */
#include "cublas.h"

/* Matrix size */
#define N  (275)

/* Host implementation of a simple version of sgemm */
/* The reference matrix multiplication, computed on the CPU */
static void simple_sgemm(int n, float alpha, const float *A, const float *B,
                         float beta, float *C)
{
    int i;
    int j;
    int k;
    for (i = 0; i < n; ++i) {
        for (j = 0; j < n; ++j) {
            float prod = 0;
            for (k = 0; k < n; ++k) {
                prod += A[k * n + i] * B[j * n + k];
            }
            C[j * n + i] = alpha * prod + beta * C[j * n + i];
        }
    }
}

/* Main */
int main(int argc, char** argv)
{
    cublasStatus status;
    float* h_A;
    float* h_B;
    float* h_C;
    float* h_C_ref;
    float* d_A = 0;
    float* d_B = 0;
    float* d_C = 0;
    float alpha = 1.0f;
    float beta = 0.0f;
    int n2 = N * N;
    int i;
    float error_norm;
    float ref_norm;
    float diff;

    /* Initialize CUBLAS */
    status = cublasInit();
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! CUBLAS initialization error\n");
        return EXIT_FAILURE;
    }

    /* Allocate host memory for the matrices (three 275x275 matrices) */
    h_A = (float*)malloc(n2 * sizeof(h_A[0]));
    if (h_A == 0) {
        fprintf (stderr, "!!!! host memory allocation error (A)\n");
        return EXIT_FAILURE;
    }
    h_B = (float*)malloc(n2 * sizeof(h_B[0]));
    if (h_B == 0) {
        fprintf (stderr, "!!!! host memory allocation error (B)\n");
        return EXIT_FAILURE;
    }
    h_C = (float*)malloc(n2 * sizeof(h_C[0]));
    if (h_C == 0) {
        fprintf (stderr, "!!!! host memory allocation error (C)\n");
        return EXIT_FAILURE;
    }

    /* Fill the matrices with test data */
    for (i = 0; i < n2; i++) {
        h_A[i] = rand() / (float)RAND_MAX;
        h_B[i] = rand() / (float)RAND_MAX;
        h_C[i] = rand() / (float)RAND_MAX;
    }

    /* Allocate device memory for the matrices (on the GPU) */
    status = cublasAlloc(n2, sizeof(d_A[0]), (void**)&d_A);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! device memory allocation error (A)\n");
        return EXIT_FAILURE;
    }
    status = cublasAlloc(n2, sizeof(d_B[0]), (void**)&d_B);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! device memory allocation error (B)\n");
        return EXIT_FAILURE;
    }
    status = cublasAlloc(n2, sizeof(d_C[0]), (void**)&d_C);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! device memory allocation error (C)\n");
        return EXIT_FAILURE;
    }

    /* Initialize the device matrices with the host matrices */
    /* (upload the host matrices to the GPU) */
    status = cublasSetVector(n2, sizeof(h_A[0]), h_A, 1, d_A, 1);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! device access error (write A)\n");
        return EXIT_FAILURE;
    }
    status = cublasSetVector(n2, sizeof(h_B[0]), h_B, 1, d_B, 1);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! device access error (write B)\n");
        return EXIT_FAILURE;
    }
    status = cublasSetVector(n2, sizeof(h_C[0]), h_C, 1, d_C, 1);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! device access error (write C)\n");
        return EXIT_FAILURE;
    }

    /* Performs operation using plain C code (the CPU reference result) */
    simple_sgemm(N, alpha, h_A, h_B, beta, h_C);
    h_C_ref = h_C;

    /* Clear last error */
    cublasGetError();

    /* Performs operation using cublas (the GPU does the same multiply) */
    cublasSgemm('n', 'n', N, N, N, alpha, d_A, N, d_B, N, beta, d_C, N);
    status = cublasGetError();
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! kernel execution error.\n");
        return EXIT_FAILURE;
    }

    /* Allocate host memory for reading back the result from device memory */
    h_C = (float*)malloc(n2 * sizeof(h_C[0]));
    if (h_C == 0) {
        fprintf (stderr, "!!!! host memory allocation error (C)\n");
        return EXIT_FAILURE;
    }

    /* Read the result back */
    status = cublasGetVector(n2, sizeof(h_C[0]), d_C, 1, h_C, 1);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! device access error (read C)\n");
        return EXIT_FAILURE;
    }

    /* Check result against reference */
    error_norm = 0;
    ref_norm = 0;
    for (i = 0; i < n2; ++i) {
        diff = h_C_ref[i] - h_C[i];
        error_norm += diff * diff;
        ref_norm += h_C_ref[i] * h_C_ref[i];
    }
    error_norm = (float)sqrt((double)error_norm);
    ref_norm = (float)sqrt((double)ref_norm);
    if (fabs(ref_norm) < 1e-7) {
        fprintf (stderr, "!!!! reference norm is 0\n");
        return EXIT_FAILURE;
    }
    printf("Test %s\n", (error_norm / ref_norm < 1e-6f) ? "PASSED" : "FAILED");

    /* Memory clean up */
    free(h_A);
    free(h_B);
    free(h_C);
    free(h_C_ref);
    status = cublasFree(d_A);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! memory free error (A)\n");
        return EXIT_FAILURE;
    }
    status = cublasFree(d_B);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! memory free error (B)\n");
        return EXIT_FAILURE;
    }
    status = cublasFree(d_C);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! memory free error (C)\n");
        return EXIT_FAILURE;
    }

    /* Shutdown (release CUBLAS resources) */
    status = cublasShutdown();
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf (stderr, "!!!! shutdown error (A)\n");
        return EXIT_FAILURE;
    }

    if (argc <= 1 || strcmp(argv[1], "-noprompt")) {
        printf("\nPress ENTER to exit\n");
        getchar();
    }

    return EXIT_SUCCESS;
}
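For completeness, my guess at how the sample is built outside the SDK's own makefiles (the exact flags may differ on your installation):

/* Assumed invocation; the SDK makefile is authoritative:
 *   nvcc -o simpleCUBLAS simpleCUBLAS.c -lcublas
 * The host code here is plain C, so nvcc mostly forwards it to the host
 * compiler and links in the CUBLAS library.
 */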


Error-handling boilerplate aside, we can see that computing with the CUBLAS library is quite concise and intuitive. For further details, see the samples shipped with the CUDA SDK.

My Outlook
ATi (AMD) will not sit still; it should be busy building components that fuse the CPU and GPU.
The bottleneck lies in CPU-GPU bandwidth, so NV may well release an optimized nForce chipset that offers higher bandwidth.
How about pairing ICE with CUDA to build a distributed GPU computing platform? Worth dreaming about.
Projects on the next generation of the BOINC platform could offer GPU-based compute clients.
posted on 2007-02-24 14:42 by 周波

FeedBack:
# re: Pure GPU Computing Platform : NVIDIA CUDA Tutorial
2007-02-24 22:58 | lai3d
CUDA stands for Compute Unified Device Architecture and is a new hardware and software architecture for issuing and managing computations on the GPU as a data-parallel computing device without the need of mapping them to a graphics API. It is available for the GeForce 8800 Series and beyond.

So only the 8800 series and above support it; it seems we need not consider it for a game engine just yet.
# re: Pure GPU Computing Platform : NVIDIA CUDA Tutorial
2007-02-24 23:54 | Jedimaster
Yet my 6200A (NV44A) can also run the debug-build demos, and without the recommended driver installed. Presumably there is some backward compatibility; after all, the architecture has not changed that much.
# re: Pure GPU Computing Platform : NVIDIA CUDA Tutorial
2007-02-26 11:25 | 空明流轉(zhuǎn)
I'm on an ATi card...
Watching and waiting; I plan to implement my current project both with CUDA and as an ordinary distributed program.
CUDA is really about giving workstations stronger compute power. As for games, at today's GPU speeds a new title keeps the card busy enough with rendering; expecting enough spare resources for general-purpose computation is not yet realistic.
# re: Pure GPU Computing Platform : NVIDIA CUDA Tutorial
2007-02-26 19:46 | Jedimaster
@空明流轉(zhuǎn)
A few days ago I visited the German BOINC forum and discussed this idea with the people there. Quoting one reply:

Quote:
Quote from Jedimaster:
NVIDIA has released CUDA, a parallel library using NVIDIA GPUs.
I also got interested in it but unfortunately it seems to work solely with the GeForce 8 series.

Quote:
if we can supply client programs that use GPU & CPU, maybe we can greatly improve our speed.
Sure we could, but when some projects released their applications as open source, some people started to recompile them optimized (making use of MMX, SSE, all kinds of technologies the original binaries still lack). This had mainly two effects:

1. People started using "optimized" core clients (manipulated to demand a multiple of credits, since credits are calculated by CPU time, which the optimizations decreased). From my point of view these people just did not understand the credit system, although demanding more credits for completing two WUs in the time of one unoptimized WU may seem reasonable.

2. and more dramatic: Some projects noticed a large discrepancy in the returned results. I think it was Einstein@Home that first asked its users not to use optimized clients. That caused problems at validation, when erroneous results should have been sorted out. I don't know what the accuracy of GPU-based calculations is.

Quote:
Sorry for my poor German, entschuldigung.
Shouldn't matter too much for most users here.

Once more briefly in German (translated): Jedimaster suggests that the CUDA library recently released by NVIDIA could be used for GPU-based computation of e.g. Fourier transforms to optimize the applications drastically. I replied that CUDA only works with the GeForce 8 series, and that such calculations, like the MMX/SSE optimizations before them, could suffer accuracy losses leading to the same validator problems as at Einstein@Home. Moreover, more "optimized" core clients would come into use again.

The Germans feel that using CUDA and similar techniques for distributed computing is still questionable as far as numerical validity goes; they do not even recommend clients specially optimized for the CPU. Given that the G8 series is not yet on the market and the middleware around it is still immature, developing such programs is probably not yet realistic.

Using all this just for games would be somewhat wasteful anyway. Physics cards have barely arrived and Havok is already joining the fray; people were just getting comfortable with SM2 when SM3 showed up. Better to finish the optimization work first.
# re: Pure GPU Computing Platform : NVIDIA CUDA Tutorial
2007-08-15 20:02 | sdfsdfsdf
Would be nice to read your site in English..
# re: Pure GPU Computing Platform : NVIDIA CUDA Tutorial
2007-08-23 15:18 | Dimitris
@Jedimaster
As far as accuracy is concerned, CUDA complies with the error-margin specifications (except for doubles, I believe). There are a lot of SDK examples on their site with reference execution comparisons; only once did one fail to pass, and the accuracy threshold was pretty tight! I'm using it now for a computational physics thesis.

For starters I just ported a simple random number generator to the GPU; the results were identical to the digits I cared about (the 6th was the crucial one). Mind though that I had to convert it to float!
Doubles should be working on CUDA 1.0, I think, but they didn't work for me; it wouldn't assign the value at all when double.
I didn't even bother parallelizing it, just a 1x1 block, and it was WAY OVER a magnitude faster. I'm impressed. All that with an 8600GTS.
              