First of all, irrlicht's commercial pedigree is shallow: if you want to apply it to a commercial product, nothing short of serious effort will do, to say nothing of UNITY3D, which is everywhere these days. Even against OGRE, IRRLICHT's lack of flashy features has meant that, over the years, its games' ARPU has never matched OGRE's, and player churn is huge.

Niko's own SupperCuber is even less worth mentioning. I downloaded it and tried it myself; it didn't seem that great, certainly not as impressive as the official blurb.

Before settling on the engine I wanted to dig into, I was drawn to OGRE once more, for quite a few reasons:

First, the leak of the Tian Long Ba Bu (天龙八部) source: it contains many OGRE models that can be loaded directly to build scenes, giving you a respectable-looking sense of achievement very quickly.

Second, OGRE's own demos ship with a large number of shaders, so you don't have to scrape them together from all over.

Third, I had agreed with a friend to build an RTS with OGRE. Since my current company's project is an RTS, my interest in the genre surged; reading the code of 0 A.D., GLEST and others also made me see that in an RTS, tooling and AI matter far more than rendering. With OGRE there are ready-made pieces like OgreCrowd, so dynamic pathfinding is no longer a headache.

Fourth, OGRE job postings and mature shipped titles far outnumber irrlicht's. Just the ones I know of (Tian Long Ba Bu, Genghis Khan, Du Gu Qiu Bai, Aurora World, Torchlight, and so on) are enough to show its clout.

Fifth, OGRE officially supports WIN8, ANDROID and IOS...

There are too many reasons to list, but this is an article about irrlicht, and it would be indecent to keep praising OGRE.

So here are my reasons, for reference by friends as conflicted as I am.

One: my time is limited. I have read OGRE's code before, but I wouldn't dare claim mastery; to use OGRE I would effectively have to start from scratch again.

Two: because IRRLICHT is small, I got to know it well back in university, so picking it up again is easy.

Three: there is a bit of a control urge in this. I want to see whether a reworked IRRLICHT really can't match OGRE.

Four: the success of Shu Men (蜀门) proves that graphics aren't everything. As long as the big picture isn't hurt, a single standout technique or effect can retain players. The flowing-light effect on Shu Men's equipment is just a texture animation carefully designed by the artists, yet it is enough to show the gap between high-grade and low-grade gear, and players can feel the splendor of their own equipment. I prefer that kind of cleverness.

Five: sheer contrariness. The more people love something, the less I want to learn it.

Six: I want a gradual transition. Use IRRLICHT until it isn't enough, then modify it; once it's modified, delete everything IRR, change the name, and it's mine.

Seven: GAMELOFT's Assassin's Creed used IRRLICHT, so I figure it's good enough. (PS: I downloaded it and played it on an IPAD. When the protagonist stands still he jitters back and forth, and the camera jitters while moving too. Puzzling; floating-point precision, perhaps? But UNITY3D and COCOS2D-X don't have this problem.)

Having listed all that, I found in the end that apart from being simpler, IRRLICHT is just not as powerful as OGRE. Still, I chose the simple one; my energy is limited. If I had to digest OGRE's whole stack and then rewrite it step by step, I think I would go mad. Big, all-encompassing systems are exactly the ones where, in actual use, you throw a lot away to buy back efficiency.

Having come this far, I have to say something else.

In 2011, the engine my company was developing was halted right after the demo. The project had run for two and a half years, and in the end there was only a pile of code and a demo program: a real loss for the company. The team members then moved over to web development. By that point only one partner and I were left of the original crew.

Unexpectedly, once we entered the web world there was no coming back, and the company's web projects didn't go smoothly either. They say the second year after graduation is a watershed, and I was exactly two years out, so I decided to change environments for a challenge and came to my current company to work on RTS game servers. The unknown seems to motivate me more, and now almost another year has passed. Gradually I began to miss engines, to miss graphics. You could say IRRLICHT is how I currently express my longing for graphics. My graphics techniques are dated and unfashionable, but that doesn't stop me from saying I love graphics.

I'll end here; I don't know what else to say.
This is the result of using FreeType for Chinese text rendering.

Because irrlicht uses bitmap fonts, it is easy to swap the font out. It also exposes a Font interface that you can replace with your own implementation.

The approach is the same as what most people describe online.
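For orientation, here is a bare skeleton of what such a replacement can look like: a sketch of mine, not the article's code, assuming Irrlicht 1.7-era IGUIFont signatures. All the actual FreeType work (FT_New_Face, FT_Load_Char, caching rasterized glyphs into textures) is elided.

#include <irrlicht.h>
using namespace irr;

// Hypothetical FreeType-backed font: rasterizes glyphs on demand, caches
// them in textures, and draws them as 2D images.
class CFreeTypeFont : public gui::IGUIFont
{
public:
    virtual void draw(const core::stringw& text, const core::rect<s32>& position,
            video::SColor color, bool hcenter = false, bool vcenter = false,
            const core::rect<s32>* clip = 0)
    {
        // For each wchar_t: fetch (or rasterize) the glyph texture, then
        // driver->draw2DImage() it at the advancing pen position.
    }

    virtual core::dimension2d<u32> getDimension(const wchar_t* text) const
    {
        // Sum the cached glyph advances to measure the string.
        return core::dimension2d<u32>(0, 0);
    }

    // The remaining pure virtuals can start out as stubs.
    virtual s32 getCharacterFromPos(const wchar_t* text, s32 pixel_x) const { return -1; }
    virtual void setKerningWidth(s32 kerning) {}
    virtual void setKerningHeight(s32 kerning) {}
    virtual s32 getKerningWidth(const wchar_t* thisLetter = 0,
            const wchar_t* previousLetter = 0) const { return 0; }
    virtual s32 getKerningHeight() const { return 0; }
    virtual void setInvisibleCharacters(const wchar_t* s) {}
};

An instance can then be installed with guienv->getSkin()->setFont(...), after which all GUI text is drawn through it.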
While working on this, another topic came up: gameswf and kfont (KlayGE Font).

gameswf is an open-source C++ Flash player.

gameloft and many other mobile apps and games use it.

And of course there is ScaleForm. Being commercial, ScaleForm is more polished than gameswf, even though gameswf was ScaleForm's prototype.

kfont has always been a font solution I like; its ability to scale arbitrarily is very appealing. Now that it has been split out into a standalone library, if I don't use gameswf I'd like to integrate it into irrlicht. The engine at my previous company adopted it two years ago, but unfortunately I wasn't the one who brought the font library in, so I never got hands-on with kfont.
The code can be downloaded from the web; the address below is for reference. Of the many articles out there, I think this one explains it in the most detail:
http://blog.csdn.net/lee353086/article/details/5260101
As for FreeType, just grab it from the official site and compile it.
BLOOM on

BLOOM off

Implementing BLOOM in IRRLICHT is not much different from doing it in other engines; the shader is still the same shader.

There is nothing special about the bloom algorithm either, and mine is rather brute-force (a shader sketch follows the steps):
render scene to texture
1/4 downsample, keeping only the overexposed pixels
h_blur: 7 taps with weights
v_blur: 7 taps with weights
compose: blend the two images
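To make those steps concrete, here is a rough HLSL sketch of the bright-pass and one blur pass (my own sketch; the 0.8 threshold and the tap weights are arbitrary placeholders, not values from the article):

sampler2D sceneTex; // the scene rendered to texture

// 1/4 downsample + bright pass: keep only the "overexposed" pixels.
float4 brightPassPS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 c = tex2D(sceneTex, uv);
    float luma = dot(c.rgb, float3(0.299, 0.587, 0.114));
    return (luma > 0.8) ? c : float4(0, 0, 0, 0);
}

// Horizontal blur: 7 taps with weights. The vertical pass is identical,
// with the offset applied to uv.y instead.
static const float weights[7] = { 0.05, 0.1, 0.2, 0.3, 0.2, 0.1, 0.05 };
float texelSize; // 1.0 / width of the blur target

float4 hBlurPS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 sum = 0;
    for (int i = 0; i < 7; ++i)
        sum += weights[i] * tex2D(sceneTex, uv + float2((i - 3) * texelSize, 0));
    return sum;
}

The compose step then simply adds the blurred result back onto the original scene image.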
Now let me describe how I do post processing in irrlicht.

irrlicht has no screen-aligned quad scene node, so supporting one natively would mean modifying engine code. I try hard not to modify a single line of IRR, except when real usage demands optimizing for efficiency. The GPU skinning, water and mirror I implemented earlier didn't touch a single line either; I don't want to disturb that pile of code for one momentary requirement. The day I truly have to modify irrlicht to reach a goal is the day the parts of irrlicht I use are ready to retire.

When rendering a scene, we usually call addXXXXSceneNode without passing a parent node, which attaches everything to the default scene root. But for post processing we need to explicitly switch scene objects on and off. So, to control this quickly, the ordinary scene nodes get an extra parent node of their own, and the post-processing chain becomes a sibling of that parent; during rendering we can then toggle each side conveniently.
Roughly like this:

RootSceneNode
    PostProcessingNode
    SceneObjectsNode
        Obj1  Obj2  Obj3
The flow:

Disable PostProcessingNode and render everything under SceneObjectsNode into the RT.

Disable SceneObjectsNode, enable PostProcessingNode, and run the chain of post effects (sketched below).
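A minimal sketch of that flow in Irrlicht terms (my code; the RT size and the node variable names are assumptions, not from the article):

// One-time setup: the off-screen render target the scene pass draws into.
video::ITexture* rt =
    driver->addRenderTargetTexture(core::dimension2d<u32>(1024, 1024));

// Pass 1: hide the post-processing chain, render the scene into the RT.
postProcessingNode->setVisible(false);
sceneObjectsNode->setVisible(true);
driver->setRenderTarget(rt, true, true, video::SColor(0, 0, 0, 0));
smgr->drawAll();

// Pass 2: hide the scene, show the post chain (whose materials sample rt),
// and render to the back buffer.
sceneObjectsNode->setVisible(false);
postProcessingNode->setVisible(true);
driver->setRenderTarget(0, false, false);
smgr->drawAll();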
irrlicht also provides no screen-aligned quad drawing, and building one by hand is a hassle. So I use a very common trick: compute the final vertex position from the UV coordinates.

The VS output is in normalized device coordinates, i.e. X and Y lie in (-1, 1). So all we need is: pos = (uv - 0.5) * 2; pos.y = -pos.y;
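In shader form the trick looks like this (a small sketch; the names are mine):

struct VS_OUT
{
    float4 pos : POSITION;
    float2 uv : TEXCOORD0;
};

VS_OUT fullscreenQuadVS(float2 uv : TEXCOORD0)
{
    VS_OUT o;
    float2 p = (uv - 0.5) * 2.0;
    p.y = -p.y;               // texture V grows downward, clip-space Y grows upward
    o.pos = float4(p, 0.0, 1.0);
    o.uv = uv;
    return o;
}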
I've been working a lot of overtime lately and haven't had time to clean the code up. Interested friends can join the group below.
Irrlicht Engine-China
254431855
Posted on May 19, 2011, 11:00 pm, by xp, under Programming.
Just got a new Android phone (a Samsung Vibrant) a month ago, so after flashing a new ROM and installing a bunch of applications, what would I want to do with the new phone? Well, I’d like to know if the phone is fast enough to play 3D games. From the hardware configuration point of view, it is better equipped than my desktop computer in the 1990s, and since my desktop computer at the time had no problem with 3D games, I would expect it to be fast enough to do the same.
At first, I considered downloading a 3D game from the market, but 3D games for Android are still rare. So why don't I just create a 3D demo game myself?

After looking around to see which 3D game engines are available for the Android platform, I settled on Irrlicht. This is an open source C++ graphics engine, not really a game engine per se, but it should have enough features for my demo 3D application. And I like to have realistic physics in my game, so what could be better than the Bullet Physics library? This is the best known open source physics library, also developed in C++. The two libraries together would make an interesting combination.

Although Irrlicht was developed for desktop computers, luckily enough someone has already ported it to the Android platform, which requires a special video driver for the graphics engine. And guess what? Someone has also created a Bullet wrapper for the Irrlicht engine. All of it in C++, and open source. All we need to do now is pull all this code together to build a shared library for Android.
In this part, I'll just describe what is needed to compile all the code for Android. Since we will be compiling C/C++ code, you'll need to download the Android native development kit (NDK). Please refer to its documentation on how to install it.

We create an Android project and add a jni folder. Then we put all the C/C++ source code under the jni folder. I created three sub-folders:

After that, all we need to do is create an Android.mk file, which is quite simple, really. You can read the makefile to see how it is structured. Basically, we just tell the Android NDK build tools that we want to build all the source code for the ARM platform and link with the OpenGL ES library, to create a shared library called libirrlichtbullet.so. That's about it.
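For concreteness, here is a trimmed sketch of what such an Android.mk can look like. The module name matches the text above, but the single placeholder source file and the flags are my own illustration, not the actual file from the download.

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE := irrlichtbullet
# Every Irrlicht, Bullet and irrBullet source file under jni/ is listed here
# (shortened to one placeholder in this sketch).
LOCAL_SRC_FILES := main.cpp

# Link with OpenGL ES 1.x and the Android log library.
LOCAL_LDLIBS := -lGLESv1_CM -llog

include $(BUILD_SHARED_LIBRARY)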
However, there's one minor thing to note. Android does not really support the C++ standard template library out of the box, but the irrBullet library makes use of it. Therefore, in the jni folder, we need to add an Application.mk file, which contains the following line:
APP_STL := stlport_static
And that’s it. Now, you can run ndk-build to build the shared library. If you have a slow computer, it would take a while. If everything is alright, you should have a shared library in the folder libs/armeabi/. That shared library contains the Bullet Physics, Irrlicht and the irrBullet wrapper libraries. You can now create your 3D games for Android with it. In the next part, we will write a small demo program using this library.
You can download all the source code and the pre-built library here.
Posted on May 20, 2011, 11:30 am, by xp, under Programming.
In the last post, we built the Bullet Physics, Irrlicht and irrBullet libraries together into a shared library for the Android platform. In this post, we are going to create a small 3D demo game for Android using the libraries we built earlier.

This demo is not really anything new; I am just going to convert an Irrlicht example to run on Android. In this simple game, we stack up a bunch of crates, and then shoot a sphere or a cube from a distance to topple them. The Irrlicht engine handles all the 3D graphics, and the Bullet Physics library takes care of rigid-body collision detection and all the realistic physics: when we shoot a sphere from a distance, how it follows a curved path through the air, how far it flies, where it lands, how it reacts when it hits the ground or the crates, and how the crates react when hit. Bullet Physics computes all of this, and Irrlicht renders the game world accordingly.

Since it is easier to create an Android project in Eclipse, we are going to work with Eclipse here. You will need the following tools:

I'm assuming you have all these tools installed and configured correctly, and that you have basic knowledge of Android programming too, so I won't get into those details here.
Let’s create a project called ca.renzhi.bullet, with an Activity called BulletActivity. The onCreate() method will look something like this:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Lock screen to landscape mode.
    this.setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
    mGLView = new GLSurfaceView(getApplication());
    renderer = new BulletRenderer(this);
    mGLView.setRenderer(renderer);
    DisplayMetrics displayMetrics = getResources().getDisplayMetrics();
    width = displayMetrics.widthPixels;
    height = displayMetrics.heightPixels;
    setContentView(mGLView);
}
This just tells Android that we want an OpenGL surface view. We will create a Renderer class for this, something very simple like the following:
public class BulletRenderer implements Renderer
{
    BulletActivity activity;

    public BulletRenderer(BulletActivity activity)
    {
        this.activity = activity;
    }

    public void onDrawFrame(GL10 arg0)
    {
        activity.drawIteration();
    }

    public void onSurfaceChanged(GL10 gl, int width, int height)
    {
        activity.nativeResize(width, height);
    }

    public void onSurfaceCreated(GL10 gl, EGLConfig config)
    {
        activity.nativeInitGL(activity.width, activity.height);
    }
}
The renderer class's methods will be invoked every time a frame needs to be rendered. There's nothing special here: when the methods are invoked, we just call the native methods in the activity class, which in turn call the C native functions through JNI. Since Irrlicht and Bullet are C++ libraries, we have to write the main part of the game in C/C++; we keep very little logic in Java.

When the surface is first created, the onSurfaceCreated() method is invoked, and here we just call nativeInitGL(), which initializes our game world in Irrlicht. This sets up the device, the scene manager, a physics world to manage the rigid bodies and their collisions, a ground floor, and a stack of crates in the middle. Then we create a first-person-shooter (FPS) camera to look at the stack of crates; the player looks at the game world through this camera.

I'm not going to describe the code line by line, since you can download it and play with it. But note the following line, which creates the Irrlicht device:
device = createDevice(video::EDT_OGLES1, core::dimension2d<u32>(gWindowWidth, gWindowHeight), 16, false, false, false, 0);
Make sure you select the correct OpenGL ES version for your Android device. Mine has version 1.x, but if you have version 2.x, use video::EDT_OGLES2 instead.
After the initialization, we would have a scene that looks like this:
When a frame needs to be rendered, the onDrawFrame() method of the renderer is invoked, and there we just call the nativeDrawIteration() function and handle the game logic in C/C++. The code looks like this:
void Java_ca_renzhi_bullet_BulletActivity_nativeDrawIteration(
        JNIEnv* env,
        jobject thiz,
        jint direction,
        jfloat markX, jfloat markY)
{
    deltaTime = device->getTimer()->getTime() - timeStamp;
    timeStamp = device->getTimer()->getTime();
    device->run();

    // Step the simulation
    world->stepSimulation(deltaTime * 0.001f, 120);

    if ((direction != -1) || (markX != -1) || (markY != -1))
        handleUserInput(direction, markX, markY);

    driver->beginScene(true, true, SColor(0, 200, 200, 200));
    smgr->drawAll();
    guienv->drawAll();
    driver->endScene();
}
As you can see, this is a very standard Irrlicht game loop; the only thing added here is a function to handle user input.

User input includes moving left and right, forward and backward, and shooting a sphere or a cube. The Irrlicht engine depends on keyboard and mouse for user interaction, which are not available on Android devices. So we create a very basic kludge to let users move around and shoot, using touch and tap. Users move a finger left and right on the left part of the screen to move left and right in the game world, and up and down on the right part of the screen to move forward and backward; a tap on the screen shoots. The movement is thus translated into a parameter, called direction, and passed to the native code to be handled. We also grab the X and Y coordinates of the shooting mark and pass them to the native code as well (sketched below).
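A hypothetical sketch of that translation in the Activity (the DIR_* constants and the lastX/lastY bookkeeping are mine; the real code is in the download):

@Override
public boolean onTouchEvent(MotionEvent event) {
    float x = event.getX();
    float y = event.getY();
    switch (event.getAction()) {
    case MotionEvent.ACTION_MOVE:
        if (x < width / 2) {
            // Left half of the screen: horizontal swipes move left/right.
            direction = (x < lastX) ? DIR_LEFT : DIR_RIGHT;
        } else {
            // Right half: vertical swipes move forward/backward.
            direction = (y < lastY) ? DIR_FORWARD : DIR_BACKWARD;
        }
        break;
    case MotionEvent.ACTION_UP:
        // A tap shoots: remember the mark so the native side can aim.
        markX = x;
        markY = y;
        break;
    }
    lastX = x;
    lastY = y;
    // drawIteration() then passes direction/markX/markY down through JNI.
    return true;
}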
That's it. You can now build it, package it into an apk, install it on your Android device, and play with it. When you shoot at the stack of crates, you get a scene that looks like this:

The performance on my Samsung Vibrant is OK; I get about 56 or 57 FPS, which is quite smooth. But if there are too many objects to animate, especially after we have shot many spheres and cubes, the screen hangs and jumps a bit, or sometimes stops reacting to user input for a fraction of a second. In a real game, we might want to remove objects that have done their work, so that the number of objects to animate stays low enough to maintain acceptable performance.

The other important thing to improve is user interaction and control. The Irrlicht engine was developed for desktop computers and relies mainly on keyboard and mouse for user interaction; these are not available on mobile devices. The current demo attempts to use touch and tap on the screen as user control, but it does not work very well. In the next post, we will try to create virtual controls on screen (e.g. buttons, dials, etc.), and we might want to take advantage of the sensors as well, which are a standard feature on mobile devices now.
You can download the source code of the demo here.
Posted on May 23, 2011, 6:25 pm, by xp, under Programming.
In the last post, we created a basic 3D demo application with Irrlicht, in which we stacked up crates and toppled them by shooting a cube or a sphere.
In this post, we will try to create an on-screen control so that you can move around with it, like using a hardware control. What we want to have is something like this, in the following screenshot:
What we have here is a control knob sitting on a base, with the knob at the centre of the base. The on-screen control always stays on top of the game scene, and its position stays fixed regardless of how you move the camera. The user can press on the knob and drag it left and right, and this in turn moves the camera left and right accordingly.
Obviously, you can implement movement direction along more than one axis too. And you can also have more than one on-screen control if you want, since it is a device with multi-touch screen. But that’s left to you as an exercise.
Placing an on-screen control is actually quite easy. All you have to do is load a picture as a texture, then draw it as a 2D image on the screen at the location where you want the control. But since we want the on-screen control to be always on top, we have to draw the 2D image after the game scene (and all the scene objects) are drawn; if we draw the 2D image first, it will be hidden by the game scene. Therefore, in your render loop, you would have something like this:
driver->beginScene(true, true, SColor(0, 200, 200, 200));
smgr->drawAll();
guienv->drawAll();
// Draw the on-screen control now
onScreenControl->draw(driver);
driver->endScene();
That’s the basic idea. Here, we have created an OnScreenControl class, which encloses two objects, one of VirtualControlBase class and the other, of VirtualControlKnob class. The draw() method of the OnScreenControl class looks like this:
void OnScreenControl::draw(IVideoDriver* driver)
{
    base->draw(driver);
    knob->draw(driver);
}
It just delegates the drawing work to its sub-objects, the control base and the control knob. Note that the knob has to be drawn after the base; otherwise it will be hidden behind the base instead of sitting on top of it. The draw() method of the base looks like:
void VirtualControlBase::draw(IVideoDriver* driver)
{
    driver->draw2DImage(_texture,
            core::position2d<s32>(pos_x, pos_y),
            core::rect<s32>(0, 0, width, height),
            0,
            SColor(255, 255, 255, 255),
            true);
}
As you see, it just draws a 2D image with the texture at the specified location. That's it.

After putting the on-screen control in place, we have to handle user touch events on the screen. If the user presses on the knob (i.e. presses within the square boundary of the knob image) and moves the finger around, we update the position of the knob according to the movement. Here we want to make sure the user cannot drag the knob out of the control base boundary (or at least not too far out), so that it looks and behaves like a real control. As the user moves the knob around, you update your camera's position accordingly, and when the knob is released, you reset its position back to the centre of the control base, as sketched below.
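A sketch of that handling (the onDrag/onRelease methods and the members here are assumed for illustration, not the article's actual class):

#include <cmath>

void OnScreenControl::onDrag(int touchX, int touchY)
{
    // Vector from the base centre to the touch point.
    float dx = (float)touchX - baseCenterX;
    float dy = (float)touchY - baseCenterY;
    float dist = sqrtf(dx * dx + dy * dy);

    // Clamp the knob inside the base radius so it behaves like a real stick.
    if (dist > baseRadius && dist > 0.0f)
    {
        dx *= baseRadius / dist;
        dy *= baseRadius / dist;
    }
    knob->setPosition((int)(baseCenterX + dx), (int)(baseCenterY + dy));
}

void OnScreenControl::onRelease()
{
    // Snap the knob back to the centre when the finger lifts.
    knob->setPosition((int)baseCenterX, (int)baseCenterY);
}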
That's basically the idea. You can grab the source code here. Ugly code, I warn you.
The major problem with programming Irrlicht on Android is the separation between the Java code and the C/C++ code. If you want to limit your program to Android 2.3 or later, you can probably write the whole program in C/C++ using the native activity class; that way you don't have to move back and forth between Java and C++. But if you want to run on older versions of Android, your Activity must be written in Java and the main game logic in C/C++. You then have to catch user interaction events in Java and pass them through JNI to C/C++. There is loss of information as you move back and forth, not to mention quite a bit of code duplication. You could certainly create a full wrapper for the Irrlicht and Bullet libraries, but that would tax your mobile device heavily and would certainly hurt performance. And creating a full wrapper for these two libraries would be a heck of a job.

The other problem is that Irrlicht is an engine developed for the desktop, where keyboard and mouse are the main input devices. The Irrlicht port to Android is mainly concerned with a display driver; it has not really gone deep into user interaction. Therefore, as you write your Irrlicht-based Android program, you have to hack together your own input handling and event model. In my demo I haven't even touched that; I just kludged together some primitive event handling. To fit multi-touch devices properly, we would have to dig into Irrlicht's scene node animators and event handling mechanisms and work it out from there. For example, we would define our own scene node animator based on touch events instead of keyboard and mouse events, and add it to the scene node we want to animate. This is something we will look into in future posts.
Lately I've been using irrlicht for a 3D fitting-room project, and to add a bit of flair I wanted to implement a mirror.

I remember the D3D "dragon book" has an example implemented with the stencil buffer, and there are OpenGL examples online too. But this time, I wanted to implement the mirror effect with irrlicht's RTT.

The principle is the same as for water reflection, just without the ripple perturbation.

Step 1: render the reflection texture

Rendering the reflection texture simply means mirroring the camera about the mirror plane. I searched irrlicht for quite a while and found no mirror-matrix routine, but I did find one online, and it's quite good.

I also went back through the code of my previous company's engine project and found it is exactly that formula. Interested friends can look here:
http://www.cnblogs.com/glshader/archive/2010/11/02/1866971.html
With this mirror reflection matrix we can mirror the camera, which is equivalent to looking out from inside the mirror, and render the world from that viewpoint. When rendering, remember to set a clip plane (I didn't in my quick test).
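For reference, here is my own transcription of that formula into Irrlicht terms (a sketch, not the article's code): the reflection matrix for a plane with normalized Normal and plane equation Normal·P + D = 0, laid out for Irrlicht's row-major matrix4.

core::matrix4 makeReflectionMatrix(const core::plane3df& p)
{
    const core::vector3df& n = p.Normal; // assumed normalized
    const f32 d = p.D;
    core::matrix4 m;
    m[0]  = 1 - 2*n.X*n.X;  m[1]  = -2*n.X*n.Y;     m[2]  = -2*n.X*n.Z;     m[3]  = 0;
    m[4]  = -2*n.Y*n.X;     m[5]  = 1 - 2*n.Y*n.Y;  m[6]  = -2*n.Y*n.Z;     m[7]  = 0;
    m[8]  = -2*n.Z*n.X;     m[9]  = -2*n.Z*n.Y;     m[10] = 1 - 2*n.Z*n.Z;  m[11] = 0;
    m[12] = -2*d*n.X;       m[13] = -2*d*n.Y;       m[14] = -2*d*n.Z;       m[15] = 1;
    return m;
}

Mirroring the camera then means pushing its position and target through this matrix (e.g. with transformVect()) before rendering the reflection pass.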
Step 2: re-render the world

When re-rendering the world, the mirror needs a special material to receive the reflection texture (projective texture mapping in the mirrored camera's space). This mapping ignores the mirror's own texture coordinates; instead it computes projected coordinates from the mirrored camera and pastes the result onto the mirror. In my test this is done with a shader, using a special material just for the mirror.

Below is the shader. It is simple; if anything is unclear, consult some material on projective texturing.
Vertex shader (HLSL):

float4x4 WorldViewProj;
float4x4 MirrorWorldViewProj;

struct VS_OUTPUT
{
    float4 position : POSITION;
    float3 uv : TEXCOORD0;
};

struct VS_INPUT
{
    float4 position : POSITION;
    float4 color : COLOR0;
    float2 texCoord0 : TEXCOORD0;
};

VS_OUTPUT main(VS_INPUT input)
{
    VS_OUTPUT output;
    float4 pos = mul(input.position, WorldViewProj);
    output.position = pos;

    // Compute the projected coordinates for the reflection texture.
    pos = mul(input.position, MirrorWorldViewProj);
    output.uv.x = 0.5 * (pos.w + pos.x);
    output.uv.y = 0.5 * (pos.w - pos.y);
    output.uv.z = pos.w;
    return output;
}
Pixel shader (HLSL):

sampler2D colorMap;

struct PS_OUTPUT
{
    float4 color : COLOR0;
};

struct PS_INPUT
{
    float4 position : POSITION;
    float3 uv : TEXCOORD0;
};

PS_OUTPUT main(PS_INPUT input)
{
    PS_OUTPUT output;
    float2 uv = saturate(input.uv.xy / input.uv.z);
    output.color = tex2D(colorMap, uv);
    return output;
}
As for the RTT plumbing, irrlicht's RenderToTexture example already explains it clearly, so I won't repeat it here.

Screenshot above. Done.

I had originally set out to do the mirror effect, but the hand-computed mirror reflection matrix never produced the effect when applied to the irrlicht camera, so I had to search the web.

On the official irrlicht wiki I found the WaterNode extension, downloaded it, fixed a few bugs, and integrated it into the Terrain demo; that's the effect in the screenshot above.

On my machine the HLSL version works fine; the GL version seems to have an RTT problem.
http://supertuxkart.sourceforge.net/
Because of the Great Firewall, you may need a proxy to reach it.
上周末,在弄换装的时候,发现irrlicht引擎本n是不支持g蒙皮的,多少令h有些失望?心里׃直寻思着怎么扩展一下,它弄出来?/p>
值得说明的是STK对irrlicht引擎的用法是很简单的Q基本上可以说是裸用Qƈ未在irrlicht接口上做修改?而是对外q行了一些必要的扩展?/p>
当然QSTK也对外开放了一个irrlicht.dllQ说是修改了其中的BUG?但直接用irrlicht是可以的?/p>
废话不多_来说说如何不修改irrlicht一行代码,通过外部扩展来实现硬仉骼动d
首先Q能够我们不修改irrlicht代码的原因,是因为ISkinnedMesh提供了一个setHardwareSkinning接口Q默认ؓfalse.
虽然q个接口的说明是"(This feature is not implemented in irrlicht yet)”Q但q不代表Q设|与不设|无差别?
查看代码可以发现Q当你设|了q个为true以后Qirrlicht完全不你的动M?意思就是,要是你非要让我干我不q不了的事,那就只有您另请高明了?/p>
irrlichtqCPU计算都不会参与?q正好让我们有机可乘Q完全用GPU接管?/p>
而要让一个顶点参与骨D,那骨骼烦引则是少不了的。所以,我们需要想办法让顶Ҏ据能够将骨骼索引代入SHADER中?/p>
在STK中用了一Uy妙的ҎQ?是使用了顶点的颜色数据Q?虽然q样一来,点颜色q不了了?但在模型渲染Ӟ点颜色很少被用到的?也就是说Q顶炚w色在STK的动L型中Q被用作了骨骼烦引?/p>
初始化骨骼烦引的Ҏ很简单,用下面的代码遍历卛_?/p>
设:我们有一个骨骼动L型是 ISkinnedMesh* pSkinnedMesh = …
那么Q初始化代码如下
for (u32 i = 0; i < pSkinnedMesh->getMeshBuffers().size(); ++i)
{
    for (u32 g = 0; g < pSkinnedMesh->getMeshBuffers()[i]->getVertexCount(); ++g)
    {
        // Clear all four channels; 0 marks an unused bone slot.
        pSkinnedMesh->getMeshBuffers()[i]->getVertex(g)->Color = video::SColor(0, 0, 0, 0);
    }
}

// After clearing, assign the real bone indices with the following code.
const core::array<scene::ISkinnedMesh::SJoint*>& joints = pSkinnedMesh->getAllJoints();
for (u32 i = 0; i < joints.size(); ++i)
{
    const core::array<scene::ISkinnedMesh::SWeight>& weights = joints[i]->Weights;
    for (u32 j = 0; j < weights.size(); ++j)
    {
        int buffId = weights[j].buffer_id;
        int vertexId = pSkinnedMesh->getAllJoints()[i]->Weights[j].vertex_id;
        video::SColor* vColor = &pSkinnedMesh->getMeshBuffers()[buffId]->getVertex(vertexId)->Color;
        // Store joint index i+1 in the first free color channel.
        if (vColor->getRed() == 0)
            vColor->setRed(i + 1);
        else if (vColor->getGreen() == 0)
            vColor->setGreen(i + 1);
        else if (vColor->getBlue() == 0)
            vColor->setBlue(i + 1);
        else if (vColor->getAlpha() == 0)
            vColor->setAlpha(i + 1);
    }
}
// After these two steps the vertex data is ready. Note that index 0 is
// treated as invalid (no bone) here.
Next, we create a shader to do the rendering.

Suppose this pSkinnedMesh is bound to an IAnimatedMeshSceneNode* node.

Then we create a material for this node. Before creating the material we need a shader callback; one like the following will do.
class HWSkinCallBack : public video::IShaderConstantSetCallBack
{
    scene::IAnimatedMeshSceneNode* m_pNode;
public:
    HWSkinCallBack(scene::IAnimatedMeshSceneNode* node) : m_pNode(node)
    {
    }

    virtual void OnSetConstants(video::IMaterialRendererServices* services, s32 userData)
    {
        scene::ISkinnedMesh* mesh = (scene::ISkinnedMesh*)m_pNode->getMesh();
        f32 joints_data[55 * 16];
        int copyIncrement = 0;
        const core::array<scene::ISkinnedMesh::SJoint*> joints = mesh->getAllJoints();
        for (u32 i = 0; i < joints.size(); ++i)
        {
            // Skinning matrix = animated global matrix * inverse bind pose.
            core::matrix4 joint_vertex_pull(core::matrix4::EM4CONST_NOTHING);
            joint_vertex_pull.setbyproduct(joints[i]->GlobalAnimatedMatrix, joints[i]->GlobalInversedMatrix);
            f32* pointer = joints_data + copyIncrement;
            for (int m = 0; m < 16; ++m)
                *pointer++ = joint_vertex_pull[m];
            copyIncrement += 16;
        }
        services->setVertexShaderConstant("JointTransform", joints_data, mesh->getAllJoints().size() * 16);
    }
};
Good. Now let's create the material:

HWSkinCallBack hwc(node); // instance of the callback above (declaration added for completeness)
s32 hwskm = gpu->addHighLevelShaderMaterialFromFiles(
    "../../skinning.vert", "main", video::EVST_VS_2_0,
    "", "main", video::EPST_PS_2_0, &hwc, video::EMT_SOLID);

// Assign the newly created material to the node.
node->setMaterialType((video::E_MATERIAL_TYPE)hwskm);

// That completes the setup.

// Finally, here is skinning.vert itself. Nothing fancy, just ordinary skinning.
// skinning.vert
#define MAX_JOINT_NUM 36
#define MAX_LIGHT_NUM 8

uniform mat4 JointTransform[MAX_JOINT_NUM];

void main()
{
    int index;
    vec4 ecPos;
    vec3 normal;
    vec3 light_dir;
    float n_dot_l;
    float dist;
    mat4 ModelTransform = gl_ModelViewProjectionMatrix;

    // Up to four bone indices are packed into the vertex color (0 = unused).
    index = int(gl_Color.r * 255.99);
    mat4 vertTran = JointTransform[index - 1];
    index = int(gl_Color.g * 255.99);
    if (index > 0)
        vertTran += JointTransform[index - 1];
    index = int(gl_Color.b * 255.99);
    if (index > 0)
        vertTran += JointTransform[index - 1];
    index = int(gl_Color.a * 255.99);
    if (index > 0)
        vertTran += JointTransform[index - 1];

    ecPos = gl_ModelViewMatrix * vertTran * gl_Vertex;
    normal = normalize(gl_NormalMatrix * mat3(vertTran) * gl_Normal);

    gl_FrontColor = vec4(0, 0, 0, 0);
    for (int i = 0; i < MAX_LIGHT_NUM; i++)
    {
        light_dir = vec3(gl_LightSource[i].position - ecPos);
        n_dot_l = max(dot(normal, normalize(light_dir)), 0.0);
        dist = length(light_dir);
        n_dot_l *= 1.0 / (gl_LightSource[0].constantAttenuation
                          + gl_LightSource[0].linearAttenuation * dist);
        gl_FrontColor += gl_LightSource[i].diffuse * n_dot_l;
    }
    gl_FrontColor = clamp(gl_FrontColor, 0.3, 1.0);

    ModelTransform *= vertTran;
    gl_Position = ModelTransform * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_TexCoord[1] = gl_MultiTexCoord1;
    /*
    // Reflections.
    vec3 r = reflect(ecPos.xyz, normal);
    float m = 2.0 * sqrt(r.x*r.x + r.y*r.y + (r.z+1.0)*(r.z+1.0));
    gl_TexCoord[1].s = r.x/m + 0.5;
    gl_TexCoord[1].t = r.y/m + 0.5;
    */
}
// Note: this shader targets GLSL under OpenGL 2.0; when testing with IRR, choose the OpenGL driver.
Here's a screenshot anyway; without one it doesn't feel real, even though a still image can't show the motion.

To prove it really is animating, here is a second one.

Many thanks to Super Tux Kart for providing such a good example of learning from and extending irrlicht.
Let me start from today's topic: equipment switching on Tian Long Ba Bu (天龙八部) models with irrlicht.

I don't quite know why, but lately I've been fiddling with OGRE and irrlicht again, and I keep wanting to reproduce things from OGRE in irrlicht.

Of course, this is not a commercial project and has no commercial purpose; it's purely for the itch.

It all started with what vczh did that evening.

I remember one evening we were chatting in the group, everyone praising how formidable the big shots are.

Finally vc posted a screenshot of his desktop and said: let me show you how a big shot is made (not his exact words, and I won't be held responsible for the wording; if you really want to see it, dig through the group chat log).

That night I thought for a long time about how I had been frittering away my days since I switched to web games.
Finally I couldn't stand it any more. I opened my portable hard drive to look at the things I had once made: 90% were projects I had set up and then abandoned.

Only then did I understand that I spend far more time thinking than doing. So I decided to change, and to find the real me again.

3D games are my true love, so much so that even if the graphics are a bit poor, as long as it's 3D I'll love it.

So I felt I should keep following my earlier road. Servers, AS3: all passing clouds. If I don't like it, I don't like it.

Privately, I started studying irrlicht again.

Then it suddenly hit me how ridiculous I am: from 2009 to 2011 I worked on engine development and leafed through irrlicht and ogre countless times, yet I never once finished a complete demo.

I never even wrote functional test cases. It suddenly struck me that some of my earlier designs were detached from reality. If you've never really used something, how can you know what's good and what's bad?

This time I'm really playing with irrlicht. Along the way I did agonize over whether OGRE fits better, but with my limited time I'd rather play with irrlicht: small and light. Of course, that means implementing more things myself, but for a code junkie that's all the more fun, and when I get stuck I can consult other engines and use them to extend irrlicht.

My goal is not to make irrlicht awesome, but to tinker on my own; and with the rise of mobile platforms, I think irrlicht fits even better. Supposedly gameloft uses it too, though that's only hearsay.

Many brothers will probably say that what I'm writing here is a pile of crap. But even the worst comment is a form of attention; criticism beats being ignored~~~~
---------------------------------------------------------- Below are the pain points I ran into ----------------------------------------------------------

Pain point 1: equipment switching needs cooperating scene nodes

irrlicht does not provide the submesh or bodypart concept that ordinary engines use to support equipment switching directly. In irrlicht, the most direct way to switch equipment is to rely on scene nodes.

For example, in my demo you can change hair, hat, clothes, bracers, boots and face. That takes 7 scene nodes: 1 root node controlling the character's world-space properties (translation, scale, rotation, etc.), and 6 more nodes, each holding the model of one body part.
Here is the code of my character class; it's not many lines:
class CCharactor
{
    IrrlichtDevice* m_pDevice;
    IAnimatedMeshSceneNode* m_pBodyParts[eCBPT_Num];
    ISceneNode* m_pRoot;
public:
    CCharactor(IrrlichtDevice* pDevice)
        : m_pDevice(pDevice)
    {
        memset(m_pBodyParts, 0, sizeof(m_pBodyParts));
        m_pRoot = pDevice->getSceneManager()->addEmptySceneNode(NULL, 12345);
    }

    void changeBodyPart(ECharactorBodyPartType ePartType, stringw& meshPath, stringw& metrialPath)
    {
        ISceneManager* smgr = m_pDevice->getSceneManager();
        IAnimatedMeshSceneNode* pBpNode = m_pBodyParts[ePartType];
        IAnimatedMesh* pMesh = smgr->getMesh(meshPath.c_str());
        if (pMesh == NULL)
            return;
        if (pBpNode == NULL)
        {
            // First time this part is used: create its node under the root.
            pBpNode = smgr->addAnimatedMeshSceneNode(pMesh, m_pRoot);
            m_pBodyParts[ePartType] = pBpNode;
        }
        else
        {
            // The node already exists: just swap the mesh.
            pBpNode->setMesh(pMesh);
        }
        ITexture* pTexture = m_pDevice->getVideoDriver()->getTexture(metrialPath.c_str());
        if (pTexture)
            pBpNode->setMaterialTexture(0, pTexture);
    }
};
// Then I used a struct to describe the part info:

struct SBodyPartInfo
{
    stringw Desc;
    ECharactorBodyPartType Type;
    stringw MeshPath;
    stringw MeterialPath;
};
Pain point 2: shared skeletons

First, irrlicht 1.8's loader for the OGRE mesh format only handles versions up to 1.40 in the code; anything higher is ignored. A few of the Tian Long Ba Bu models are 1.30, but the ones used for equipment switching and for the protagonist are all 1.40. Possibly because the parsing is incomplete, the 1.40 skeletal animations do not play correctly. I've spent several hours on this without solving it; I'll continue tomorrow.

Second, sharing a skeleton across multiple meshes can only be done through useAnimationFrom, which takes a mesh parameter. That is awkward: Tian Long Ba Bu keeps character actions separate, one skeleton file per action set, so implementing sharing is a bit of a pain (see the sketch below).
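A minimal sketch of the mesh-level sharing (my code, not the article's; as far as I know useAnimationFrom is declared on ISkinnedMesh and matches joints by name, and the file names here are made up):

// The mesh that actually carries the animation keys (the "skeleton" file),
// and a body-part mesh that should borrow them. Paths are hypothetical.
scene::ISkinnedMesh* anims = (scene::ISkinnedMesh*)smgr->getMesh("hero_actions.mesh");
scene::ISkinnedMesh* coat  = (scene::ISkinnedMesh*)smgr->getMesh("hero_coat.mesh");

// Reuse the animation of 'anims' for 'coat'; joints are matched by name.
coat->useAnimationFrom(anims);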
Pain point 3: model file formats

Unlike OGRE, irrlicht lacks a single strong, mature model file format. It does provide the .irr format, but that is only for irrEdit scene output. First, look at this picture:

This picture is the file dialog of the MeshViewer in the irrlicht samples, listing the supported model file types. Look through them: how many of these formats could you take straight into a project? mdl and ms3d are worth considering; dae I've seen used in the open-source game 0 A.D.; the rest I'm completely unfamiliar with. Even the OGRE .mesh support is incomplete. Do I really have to roll my own?

The best plan I can think of is to pick a format with complete exporter plugins for both models and animations as the interchange format with the art tools, and then write a tool of my own to convert it into my own format.
Pain point 4: hardware skinning

I assumed a tech fanatic like NIKO would never let this feature slip, and I was delighted to find a hardware-skinning function on the scene node. Then one look at the comment made me choke:
//! (This feature is not implemented in irrlicht yet)
virtual bool setHardwareSkinning(bool on);
I haven't dug deeply anywhere else, so I'll hold my tongue for now and continue down this foolish, naive road of tinkering.

Here's a screenshot to commemorate my irrlicht output.

Plain clothes

Changed into armor

Changed hat and boots

PS: the hair has no texture, so it's white.