re: Pure GPU Computing Platform : NVIDIA CUDA Tutorial (Dimitris, Thu, 23 Aug 2007 07:18 GMT) http://www.shnenglu.com/Jedimaster/archive/2007/08/23/18939.html#30674
As far as accuracy is concerned, CUDA complies with the error-margin specifications (except for doubles, I believe). There are a lot of SDK examples on their site with reference-execution comparisons; only once did one fail to pass, and the accuracy threshold was pretty tight. I'm using it now for my computational physics thesis.
For starters I just ported a simple random number generator to the GPU; the results were identical up to the digits I cared about (the 6th was the crucial one). Mind though that I had to convert it to float!
Doubles should be working in CUDA 1.0, I think, but they didn't work for me: the value wouldn't get assigned at all when I used double.
I didn't even bother parallelizing it, just a 1x1 block, and it was way over an order of magnitude faster. I'm impressed. All that with an 8600 GTS.
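To make that experiment concrete, here is a minimal sketch under my own assumptions: a simple linear congruential generator (illustrative constants and kernel name, not the commenter's actual code) run in a CUDA kernel launched with a single 1x1 block, keeping the state in integers and the output in float since doubles were not usable. Compile with nvcc.

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative linear congruential generator, not the commenter's actual code.
// The whole loop runs in one thread on purpose, matching the 1x1-block experiment.
__global__ void lcg_kernel(float *out, int n, unsigned int seed)
{
    unsigned int state = seed;
    for (int i = 0; i < n; ++i) {
        state = 1664525u * state + 1013904223u;   // LCG step (Numerical Recipes constants)
        out[i] = state * (1.0f / 4294967296.0f);  // map to [0, 1) in single precision
    }
}

int main()
{
    const int n = 1024;
    float *d_out = 0;
    cudaMalloc((void **)&d_out, n * sizeof(float));

    lcg_kernel<<<1, 1>>>(d_out, n, 12345u);       // one block, one thread
    cudaDeviceSynchronize();

    float h_out[8];
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 8; ++i)
        printf("%f\n", h_out[i]);                 // compare against the CPU reference run

    cudaFree(d_out);
    return 0;
}

Running the same loop on the CPU with identical constants gives the reference sequence to compare digit by digit, which is the kind of check the comment describes.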
re: Pure GPU Computing Platform : NVIDIA CUDA Tutorial (Jedimaster, Mon, 26 Feb 2007 11:46 GMT) http://www.shnenglu.com/Jedimaster/archive/2007/02/26/18939.html#19005
A few days ago I visited the German BOINC forum and discussed this idea with the people there. I'm quoting one of the replies below.
Quote (Jedimaster): NVIDIA has released CUDA, a parallel computing library that uses NVIDIA GPUs.
I also got interested in it, but unfortunately it seems to work solely with the GeForce 8 series.
Quote (Jedimaster): If we can supply a client program that uses both the GPU and the CPU, maybe we can greatly improve our speed.
Sure we could, but when some projects released their applications as open source, some people started recompiling them with optimizations (making use of MMX, SSE, and all kinds of technologies the original binaries still lack). This had mainly two effects:
1. People started using "optimized" core clients (manipulated to demand a multiple of the credits, since credits are calculated from CPU time, which the optimizations had decreased). From my point of view these people just did not understand the credit system, although demanding more credits for completing two WUs in the time of one unoptimized one may seem reasonable.
2. More dramatically: some projects noticed a large discrepancy in the returned results. I think it was Einstein@Home that first asked its users not to use optimized clients, because this caused problems during validation, when erroneous results should have been sorted out. I don't know what the accuracy of GPU-based calculations is.
Quote (Jedimaster): Sorry for my poor German, entschuldigung.
Shouldn't matter too much for most users here.
A short recap (in German in the original): Jedimaster suggests that CUDA, the library NVIDIA recently released, could be used for GPU-based computations of, for example, Fourier transforms, in order to optimize the applications drastically. I replied that CUDA only works with the GeForce 8 series, and that such computations, like the earlier MMX/SSE optimizations, could cause drops in accuracy that would lead to the same problems in the validator process as at Einstein@Home. On top of that, more "optimized" core clients would come into use again.
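Since the recap above mentions using CUDA for Fourier transforms, here is a minimal sketch of how that could look with the CUFFT library that ships with the CUDA toolkit. The transform size and test signal are my own illustrative choices, not anything from an actual BOINC application, and it is single precision only (a forward complex-to-complex transform). Link with -lcufft when building with nvcc.

#include <cstdio>
#include <cuda_runtime.h>
#include <cufft.h>

int main()
{
    const int N = 256;                          // transform size (illustrative)

    // Fill a host buffer with a single-precision test signal.
    cufftComplex h_signal[N];
    for (int i = 0; i < N; ++i) {
        h_signal[i].x = (float)i / N;           // real part
        h_signal[i].y = 0.0f;                   // imaginary part
    }

    // Copy it to the device.
    cufftComplex *d_signal = 0;
    cudaMalloc((void **)&d_signal, N * sizeof(cufftComplex));
    cudaMemcpy(d_signal, h_signal, N * sizeof(cufftComplex), cudaMemcpyHostToDevice);

    // Plan and run an in-place forward C2C FFT on the GPU.
    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);
    cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD);

    // Copy the result back and print a few bins for comparison with a CPU FFT.
    cudaMemcpy(h_signal, d_signal, N * sizeof(cufftComplex), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 4; ++i)
        printf("bin %d: %f + %fi\n", i, h_signal[i].x, h_signal[i].y);

    cufftDestroy(plan);
    cudaFree(d_signal);
    return 0;
}

Comparing such GPU output against a trusted CPU FFT would be one way to judge whether the accuracy concerns raised above actually apply to a given project.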