<page pageid="35" ns="0" title="Reducing high-poly models">
<rev contentformat="text/x-wiki" contentmodel="wikitext" xml:space="preserve">There are at least three free ways to reduce the polygon count of a model:
You can use the Decimate modifier in Blender. It doesn't preserve texture coordinates, but the results are okay.
You can get MeshLab here: [http://sourceforge.net/projects/meshlab/ Download MeshLab]
MeshLab is quite easy to use and very powerful. There's even a filter that reduces high-poly models while preserving the texture coordinates (Filters->Remeshing, Simplification and Reconstruction->Quadric Edge Collapse Decimation (with textures)). The results are excellent.<br>
However, this filter has an issue with some models: it fails with an error message about missing texture coordinates. In that case, there is a workaround:
*import the mesh
*export it as ply
*create a new project
*import the ply file
*run the Cleaning and Repairing->Merge Close Vertices filter on it
*run the reduce filter again as described above
VIZup is another free way to reduce high-poly models to low-poly. The results are okay, and the tool is very easy to use.<br>
'''What you need:'''<br>
VIZup FREE EDITION, version 1.8 (NOTE: later versions require paid registration to export). To install in Windows Vista, click on Properties>Compatibility and check the "run this program in compatibility mode" box (select Windows 98).<br>
Python (NOTE: install Blender first to determine what version you need)<br>
OBJ Importer (Or whatever format your high-poly model is in)<br>
WRL Exporter (Here are a couple of different ones)<br>
[http://www.bitmanagement.de/download/BS_Exporter_Blender/cnt-index.php?lang=en&prod=BS&32Exporter&32Blender BS Exporter for Blender]<br>
'''Installing required items:'''<br>
1) Download and install Blender (pay attention to what version of Python it recommends you download)<br>
2) Download and install Python<br>
3) Download all the scripts and copy them into the Blender scripts directory<br>
'''How to reduce a humanoid (process similar for other models):'''<br>
1) Start with just the head of your high-poly model. I usually remove the eyes and recreate them later after reducing the head's polys.<br>
2) Export the head from Blender as a .wrl (don't worry about the textures - you will have to redo the UVs later anyway)<br>
3) Load the head .wrl into VIZup, and reduce the polys to as low as you can get them while still being able to distinguish the nose, eyes, and ears (Usually can't get any lower than around 800-900 polys). Don't worry about a few gaps in the mesh - you can weld these shut later. Note: it is possible to reduce a reduced model to get even fewer polys if the first time isn't enough.<br>
4) Save the new low-poly head to .wrl format.<br>
5) Import the head .wrl into Blender.<br>
6) All the vertices will be disjointed and there will be gaps in the mesh. This is where the work comes in: you will need to meticulously weld together vertices that are close to each other to fill in the gaps. Avoid the temptation to divide or add polys unless absolutely necessary; poly count adds up quickly. With practice, this usually takes about 6-8 hours for a head to make it look nice.<br>
7) Create the eyes if you removed them (may require adding a few extra polys around the eye sockets first).<br>
Repeat this process for the rest of the body. I usually remove the fingers and toes, since they disappear during reduction anyway, and recreate them later (as we did with the eyes). I do the head and body separately because the body can usually be reduced a lot further than the head (around 400-500 polys at the minimum).<br>
After combining the head and body and welding them together, you can UV and texture them, set up a skeletal hierarchy, and animate. The entire process takes several days, but there is a benefit to creating models this way: the model has a more natural, non-symmetric feel that is very difficult to produce from scratch (models designed from scratch are often "too perfect", which gives them a strangely uncanny appearance if you try to texture them with actual photographs). I find that low-poly models produced with this method are much easier to texture, and the imperfections are much more believable.<br>
<page pageid="79" ns="0" title="Reducing memory usage">
<rev contentformat="text/x-wiki" contentmodel="wikitext" xml:space="preserve">=== Reducing memory usage ===
Android devices are much more limited than the usual desktop PC when it comes to memory. This page explains some ways to reduce the memory usage of a jPCT-AE powered application.
==== Watch your content ====
Check your models and textures to make sure that they aren't overly complex. You might want to lower the texture resolution or the vertex/polygon count of your mesh to save memory. Keep in mind that for textures, the uncompressed size matters, not the size that the file has on disk. For example, a 32bit 512*512 texture needs 512*512*4 = 1,048,576 bytes (1 MB). A 256*256 texture of the same depth only uses a quarter of that. In addition, the texture is stored twice by default: one copy remains in main memory while one is uploaded to the GPU (you can disable this behaviour, at the cost of not being able to recover from a pause or stop event without reloading everything from scratch).
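The arithmetic above can be wrapped in a small helper to estimate texture memory. This is plain Java, not part of the jPCT-AE API; the 4/3 mip-map factor is the usual geometric-series estimate for a full mip chain.

```java
public class TextureMemory {

    // Uncompressed size in bytes: width * height * bytesPerPixel.
    // A full mip-map chain adds about one third on top (1 + 1/4 + 1/16 + ... = 4/3).
    static long textureBytes(int width, int height, int bytesPerPixel, boolean mipmapped) {
        long base = (long) width * height * bytesPerPixel;
        return mipmapped ? base * 4 / 3 : base;
    }

    public static void main(String[] args) {
        System.out.println(textureBytes(512, 512, 4, false)); // 1048576 -> 1 MB
        System.out.println(textureBytes(256, 256, 4, false)); // 262144 -> a quarter of that
    }
}
```

Remember that by default jPCT-AE holds a second copy in main memory, so double the result for the effective footprint.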
==== Reduce texture memory usage ====
As said, smaller textures need less memory. If this still doesn't help, there are ways to reduce memory usage somewhat further.
===== Use 16bit textures instead of 32bit ones =====
It might be sufficient to use 16bit textures instead of 32bit ones, at least for some textures. These textures use only half the memory on the GPU and might also help to increase performance. To make a single texture use 16bit (i.e. 4 bits per color channel), you can use [http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Texture.html#enable4bpp(boolean) Texture.enable4bpp(boolean)]. If you want all textures to be handled that way by default, [http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Texture.html#defaultTo4bpp(boolean) Texture.defaultTo4bpp(boolean)] is your friend.
Keep in mind that using this option increases color banding in the textures, which doesn't look good on all of them.
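A sketch of both switches. enable4bpp is the per-texture call from the Javadoc; the global default setter is assumed by analogy with the other Texture.defaultTo… methods, and the texture itself is an empty placeholder.

```java
import com.threed.jpct.Texture;
import com.threed.jpct.TextureManager;

// Per-texture: store and upload this texture with 16 bits per pixel.
Texture rocks = new Texture(64, 64); // placeholder; you would load a real image here
rocks.enable4bpp(true);
TextureManager.getInstance().addTexture("rocks", rocks);

// Global (assumed name): make all textures default to 16bit from now on.
Texture.defaultTo4bpp(true);
```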
===== Compress the in-memory copy of the texture data =====
By default, jPCT-AE keeps a copy of a texture's pixel data in main memory to handle pause/stop events, which destroy the GL context, i.e. all textures uploaded to the GPU. You can save memory at the cost of upload performance by compressing this in-memory copy. Simply call [http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Texture.html#compress() Texture.compress()] to enable this feature. Please note that this causes a small memory peak when uploading the compressed texture, because it has to be decompressed first.
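A minimal sketch; the texture here is an empty placeholder rather than a real loaded image:

```java
import com.threed.jpct.Texture;

Texture huge = new Texture(1024, 1024); // placeholder for a large, loaded texture
huge.compress(); // zip the in-memory backup copy; it is unpacked again on upload
```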
===== Avoid mip-map generation on textures that don't need it =====
In general, you should use mip-mapping ([http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Texture.html#setMipmap(boolean) Texture.setMipmap(boolean)]) to improve image quality and avoid texture flickering. However, some textures just don't need it, for example uni-colored textures or textures used for blitting only. You can safely disable mip-mapping for these textures to save some memory.
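For example (the texture is a placeholder for something like a HUD background):

```java
import com.threed.jpct.Texture;

// A uni-colored texture used only for 2D blitting; mip-maps add nothing here.
Texture hud = new Texture(64, 64);
hud.setMipmap(false); // skip mip-map generation for this texture to save memory
```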
===== Use the Virtualizer =====
You can use the Virtualizer to swap in-memory texture data out to the SD card. To do this, set your Virtualizer instance in the TextureManager and call TextureManager.virtualize(<Texture>) for each texture that you want to virtualize. A virtualized texture still consumes the same amount of memory on the GPU, but no main memory. Keep in mind that this saves memory but increases startup time.
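Following the description above, the setup looks roughly like this. Treat the setter names (setContext, setVirtualizer) as assumptions to be checked against the Javadoc; the texture is a placeholder.

```java
import com.threed.jpct.Texture;
import com.threed.jpct.TextureManager;
import com.threed.jpct.util.Virtualizer;

Virtualizer virtualizer = new Virtualizer();
virtualizer.setContext(getApplicationContext()); // assumed: needs an Android Context for the SD card

TextureManager tm = TextureManager.getInstance();
tm.setVirtualizer(virtualizer); // assumed setter name
Texture skyline = new Texture(256, 256); // placeholder texture
tm.addTexture("skyline", skyline);
tm.virtualize(skyline); // the in-memory pixel data now lives on the SD card
```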
===== Use texture ETC1-compression =====
You can enable ETC1 texture compression on hardware that supports it (almost all current devices do). If it isn't supported, jPCT-AE reverts to uncompressed textures instead. The memory-saving effect is on par with using 16bit textures, but the quality might be better in some cases.
The compression happens at runtime. If you have a Virtualizer instance assigned to the TextureManager, you can enable caching of compressed textures by setting Config.cacheCompressedTextures to true.
==== Reduce animation size ====
Keyframe animations are fast and simple to use, but require a lot of memory. However, there are some ways to reduce this.
===== Reduce the number of keyframes =====
Your animation might look good enough with fewer keyframes. Just give it a try.
===== Remove animation sequences =====
An MD2 file usually contains a large set of animation sequences that you might not need in your application. You can remove them by calling [http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Animation.html#remove(int) Animation.remove(int)].
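For example (the sequence index is made up; inspect your model's sequences first):

```java
import com.threed.jpct.Animation;
import com.threed.jpct.Object3D;

// model: an Object3D loaded from an MD2 file.
Animation anim = model.getAnimationSequence();
// Remove a sequence the application never plays (index 3 is an assumption;
// note that the indices of the remaining sequences may shift afterwards).
anim.remove(3);
```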
===== Compress meshes =====
If you create your keyframe animation from individual meshes (rather than by loading an MD2 file), it can help a little to compress the meshes. You can either let cloneMesh(true) ([http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Mesh.html#cloneMesh(boolean) Mesh.cloneMesh(boolean)]) do this for you or call compress() ([http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Mesh.html#compress() Mesh.compress()]) on the mesh yourself.
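A sketch of building a compressed keyframe animation from pre-made frame objects; frame1, frame2 and model are assumed to exist:

```java
import com.threed.jpct.Animation;
import com.threed.jpct.Object3D;

// frame1, frame2: Object3Ds holding the individual keyframe meshes.
Animation walk = new Animation(2);
walk.createSubSequence("walk");
walk.addKeyFrame(frame1.getMesh().cloneMesh(true)); // 'true' compresses the cloned mesh
walk.addKeyFrame(frame2.getMesh().cloneMesh(true));
model.setAnimationSequence(walk); // model: the Object3D being animated
```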
===== Strip meshes =====
You can safely remove the triangle information from a mesh if it's used for animations only; if it isn't, you should be working on a clone anyway, so in practice this applies in every case. A call to strip() ([http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Mesh.html#strip() Mesh.strip()]) does this per mesh. If you want to strip the whole animation, call strip() on the animation itself ([http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Animation.html#strip() Animation.strip()]).
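In code, both variants are one-liners (the variables are placeholders):

```java
// keyframe: a com.threed.jpct.Mesh used only as an animation keyframe.
keyframe.strip(); // drop the triangle data of a single mesh

// walk: a com.threed.jpct.Animation instance.
walk.strip(); // strip every keyframe mesh in the animation at once
```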
==== Reduce memory usage of objects ====
===== Strip objects =====
If you don't want to modify an object at runtime, you can strip it to save some memory by calling strip() ([http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Object3D.html#strip()]). However, if you want to clone this object later at runtime, you might run into trouble...
===== Make objects share meshes in main memory =====
If an object is used multiple times at once in an application (bullets, multiple enemies, ...), you shouldn't load it multiple times; instead, create clones of the existing object. This can be done either by calling cloneObject() ([http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Object3D.html#cloneObject() Object3D.cloneObject()]) or by using an appropriate constructor of Object3D.
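A typical pattern looks like this; master and world are placeholders for objects you have already set up:

```java
import com.threed.jpct.Object3D;

// master: an Object3D loaded once from disk; keep it out of sight and clone it.
master.setVisibility(false);

Object3D bullet = master.cloneObject(); // shares the mesh with 'master' in main memory
world.addObject(bullet); // world: your com.threed.jpct.World instance
```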
===== Make objects share meshes on the GPU =====
In addition to sharing meshes in the VM's memory, you can also share the data on the GPU. shareCompiledData() ([http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Object3D.html#shareCompiledData(com.threed.jpct.Object3D) Object3D.shareCompiledData(Object3D)]) is your friend here. This can also be used to make one call to animate() drive dozens of instances.
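Building on the cloning pattern above (enemyMaster and world are placeholders):

```java
import com.threed.jpct.Object3D;

Object3D enemy = enemyMaster.cloneObject(); // shares the mesh in the VM's memory
enemy.shareCompiledData(enemyMaster);       // ...and the compiled data on the GPU
world.addObject(enemy);
```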
===== Use indexed geometry =====
By default, jPCT decides which objects are compiled to indexed geometry and which to flat geometry. Flat geometry consumes a little more memory, so if you are using a lot of small objects that jPCT compiles to flat, you can call forceGeometryIndices(true) ([http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Object3D.html#forceGeometryIndices(boolean) Object3D.forceGeometryIndices(boolean)]) to force them into indexed mode.
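For example (pebbles and world are placeholders for your own objects):

```java
import com.threed.jpct.Object3D;

// pebbles: many small Object3Ds that jPCT would otherwise compile to flat geometry.
for (Object3D pebble : pebbles) {
    pebble.forceGeometryIndices(true); // compile to indexed geometry instead
    world.addObject(pebble);
}
```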
===== Use the Virtualizer =====
By default, jPCT-AE keeps a copy of the compiled data that will be sent to the GPU, to handle context changes properly. You can offload this data to the SD card by assigning a Virtualizer instance to an Object3D.