Using the Virtualizer class would significantly slow down loading, right?
Especially with high-resolution textures...?
So, I tried this with my live wallpaper:
gLView.setPreserveEGLContextOnPause(true);
And it seems to work fine, even without keeping the textures in VM memory.
When the context is lost, the GLSurfaceView and Renderer seem to get disposed and recreated.
Textures are then re-uploaded, so that's fine I guess.
Context losses aren't supposed to happen anyway, I think, but for some reason they do happen in the Live Wallpaper picker when changing the orientation...
Meanwhile, the same orientation change does not seem to trigger a context loss when the wallpaper is applied in a certain launcher.
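For illustration, the lifecycle described above can be modeled in a few lines of plain Java (this is a toy sketch, not Android or jPCT-AE API; all class and method names are made up): losing the context invalidates everything on the GPU, so the recreated Renderer has to re-upload each texture from its VM copy.

```java
import java.util.*;

// Toy model of the lifecycle above (illustrative names only, not real API):
// a context loss wipes the GPU-side textures, and the recreated renderer
// re-uploads them from the copies still held in VM memory.
class GlLifecycleModel {
    final Map<String, byte[]> vmCopies = new HashMap<>(); // texture data on the heap
    final Set<String> onGpu = new HashSet<>();            // names of uploaded textures

    void loadTexture(String name, byte[] pixels) { vmCopies.put(name, pixels); }

    void onContextLost() { onGpu.clear(); }               // GPU-side state is gone

    // Runs when the Renderer is recreated: re-upload everything we still hold.
    void onSurfaceCreated() { onGpu.addAll(vmCopies.keySet()); }
}
```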
So now I basically only load textures when the live wallpaper is launched (and on context loss).
This loading is a one-time thing, but it still takes noticeable time...
I also noticed that when I add textures to the TextureManager, the VM copy is kept until the texture is actually used on a visible Object3D in a World.draw() call.
Is it somehow possible to upload textures to the GPU immediately when they are added to the TextureManager? (except for textures that are only blitted, since I guess those don't need to be uploaded to the GPU...)
I thought this might be possible by calling world.draw() after an Object3D with the texture has been added to the world.
But the object also needs to be visible for its texture to get uploaded; so is there some way to make all objects in my world 'visible'?
Sooner or later all my objects become visible anyway...
So how it now works:
[add Texture to TextureManager -> apply Texture to Object3D -> add Object3D to world -> world.draw() -> rendering starts -> after a few seconds there is a hiccup because the added Object3D has become visible -> texture is uploaded to GPU -> rendering continues]
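That hiccup can be sketched in plain Java (again not jPCT-AE API; `TextureStore` and its methods are invented for illustration): with deferred upload, the cost lands inside a draw call in the middle of rendering, while an eager up-front pass moves it before rendering starts.

```java
import java.util.*;

// Toy model (illustrative names, not jPCT-AE): deferred vs eager texture upload.
class TextureStore {
    private final Map<String, byte[]> vmCopies = new HashMap<>(); // "VM copy"
    private final Set<String> uploaded = new HashSet<>();         // "on the GPU"

    void add(String name, byte[] pixels) { vmCopies.put(name, pixels); }

    boolean hasVmCopy(String name) { return vmCopies.containsKey(name); }

    // Deferred path: upload happens inside the draw that first uses the texture.
    // Returns true when that draw pays the upload cost (the "hiccup").
    boolean drawWith(String name) {
        if (!uploaded.contains(name)) {  // first visible use of this texture
            uploaded.add(name);
            vmCopies.remove(name);       // VM copy is freed only now
            return true;                 // upload cost lands mid-rendering
        }
        return false;
    }

    // Eager path: upload everything up front, before rendering starts.
    void uploadAll() {
        uploaded.addAll(vmCopies.keySet());
        vmCopies.clear();
    }
}
```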
And what I want:
[add Texture to TextureManager (as VM copy) -> apply texture to Object3D -> add Object3D to world -> upload texture to GPU -> rendering starts]
I think I could possibly achieve this by pointing the camera at the Object3D so it becomes visible:
[add Texture to TextureManager (as VM copy) -> apply texture to Object3D -> add Object3D to world -> make Camera face the Object3D -> upload texture to GPU -> rendering starts]
And since I guess I never really need a VM copy of these textures anyway (only for the initial upload), would it be possible to upload textures to the GPU directly from a given InputStream (of a .png file)? I suppose that would save VM memory...
So what I originally wanted:
[add Texture to TextureManager (as VM copy) -> apply texture to Object3D -> add Object3D to world -> upload texture to GPU -> rendering starts]
What I would want now:
[apply Texture PNG-InputStream to Object3D (no VM copy of the Texture) -> add Object3D to world -> upload texture to GPU using the PNG-InputStream -> rendering starts] (or perhaps a bitmap byte-InputStream if that works better...)
Basically, what I want to achieve here is that a Texture doesn't take up VM memory when its VM copy gets disposed after uploading anyway...
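As a rough illustration of the "no VM copy" idea, the bytes read from an InputStream could go straight into a direct ByteBuffer, which lives outside the Java heap and is the kind of buffer GL-style upload calls accept (plain-Java sketch; `StreamUpload` and its method are made up for illustration, and PNG decoding is left out):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

// Sketch (illustrative, not jPCT-AE API): stream pixel bytes into a direct
// buffer so the texture data never sits on the Java heap as a long-lived copy.
final class StreamUpload {
    // size must be at least the number of bytes the stream will deliver.
    static ByteBuffer toDirectBuffer(InputStream in, int size) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(size); // off-heap allocation
        byte[] chunk = new byte[8192];                    // small reusable window
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.put(chunk, 0, n);
        }
        buf.flip(); // ready for a glTexImage2D-style upload
        return buf;
    }
}
```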