Ok, I'll give you a quick rundown of how it all works just to set some grounding for what I'm doing/attempting:
The voxel world is generated from 20 .png images with ARGB pixels; each image is loaded into two textures.
The first has GLFiltering on, so its texels are blurred; the second is the exact same image loaded without GLFiltering.
(I've been storing the texture IDs for these textures in an int[] array so I don't have to use the string references each time I want to change something.)
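Stripped down, the loading amounts to something like this (the "layerN" names, paths, and class name are placeholders rather than my real ones, and I've left the filtering toggle out since it's beside the point):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import com.threed.jpct.Texture;
import com.threed.jpct.TextureManager;

public class LayerLoader {
    // Load the 20 layer images once, register each as two textures
    // (sharp + filtered), and cache the integer IDs so the string
    // lookups only ever happen here.
    public static BufferedImage[] load(int[] sharpIds, int[] softIds) throws Exception {
        BufferedImage[] layerImages = new BufferedImage[20];
        TextureManager tm = TextureManager.getInstance();
        for (int i = 0; i < 20; i++) {
            layerImages[i] = ImageIO.read(new File("world/layer" + i + ".png"));
            Texture sharp = new Texture(layerImages[i], true); // true = keep the alpha channel
            Texture soft  = new Texture(layerImages[i], true); // GLFiltering toggled elsewhere
            tm.addTexture("layer" + i, sharp);
            tm.addTexture("layer" + i + "_soft", soft);
            sharpIds[i] = tm.getTextureID("layer" + i);
            softIds[i]  = tm.getTextureID("layer" + i + "_soft");
        }
        return layerImages;
    }
}
```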
The textures are put onto planes with transparency turned on, which means the see-through parts of a texture let the user see lower layers through a higher one.
The planes are separated along the Y axis, so the textures appear to form voxels and occupy 3D space purely because there are 20 stacked 1024x1024 layers.
While that implies one plane per image, i.e. 20 layers and 20 planes, it isn't quite accurate.
One plane per layer gives a very flat, blocky feel, so to make the voxels seem more volumetric I add a number of intermediate planes between the main layers; these still use the texture of their main layer. (That number isn't settled yet: on my laptop I use roughly 2 to 5 intermediate layers, on my desktop between 8 and 20.)
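Reduced to a sketch, the stacking is roughly this (sizes, spacing, and the transparency value are illustrative, and the texture names match the loading sketch above):

```java
import com.threed.jpct.Object3D;
import com.threed.jpct.Primitives;
import com.threed.jpct.World;

public class LayerStack {
    // Build 20 main planes plus `subLayers` intermediates per layer, all
    // sharing that layer's texture.
    public static void build(World world, int subLayers) {
        float spacing = 1.0f;
        for (int i = 0; i < 20; i++) {
            for (int j = 0; j <= subLayers; j++) {
                Object3D plane = Primitives.getPlane(1, 512f); // one quad, half-size 512
                plane.rotateX((float) (Math.PI / 2));          // lay the plane flat
                plane.setTexture("layer" + i);                 // name from the loading sketch
                plane.setTransparency(100);                    // >= 0 enables alpha blending
                // jPCT's Y axis points down, so higher layers go towards -Y
                plane.translate(0f, -(i * spacing + j * spacing / (subLayers + 1f)), 0f);
                plane.build();
                world.addObject(plane);
            }
        }
    }
}
```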
All of this combined means the number of planes being rendered with an ARGB texture can range from about 40 to 400, depending on what I'm doing and where I am. (This means the framerate, even on a good computer, can be as low as 60 or 70.)
The other thing I'm doing involves the GLFiltered copy of each texture. It's simply used as the bottom of each stack of duplicate layers to add shadowing, so it needs to be regenerated alongside the normal unfiltered texture used in the majority of the layers.
(A side point: the topmost layer also has normal mapping on it, but that seems to be working fine, so ignore it for now; if I took it out, the engine's functionality wouldn't change and nothing more or less would break.)
Here's a quick picture showing all of this stuff:
This is just a quick comparison of what it would look like without normal mapping or the shadow layer:
Right, that explains how the engine works.
What I'm doing now is simply allowing the user to save the world after editing with F5 and load it again with F9.
All I'm able to do in the actual program is draw voxels into empty slots (where the alpha value is 0) with the left mouse button and remove them with the right mouse button. This is all working brilliantly (if a little slowly, but we'll get to ITextureEffect in a bit).
This is where all the BufferedImage creation and drawing comes in.
To edit the texture data (the voxels) I need to be able to manually set pixels (or, even better, texels) to a value. To do this, I have to keep a record of the image loaded for each texture and use it when drawing pixels on mouse click.
I get a graphics context for an editable image, draw the record image for the texture being edited onto it, draw my change onto that (usually a single pixel, a line, or subtracting some pixels), then reload the result into a new texture. (As I said, this all works fine.)
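In code, the click handler boils down to roughly this (the class and names are illustrative; layerImages is the record array from the loading sketch):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import com.threed.jpct.Texture;
import com.threed.jpct.TextureManager;

public class VoxelEditor {
    // The record images kept from loading (see the loading sketch above).
    private final BufferedImage[] layerImages;

    public VoxelEditor(BufferedImage[] layerImages) {
        this.layerImages = layerImages;
    }

    // Copy the record, draw the single-texel change, swap the textures in.
    public void setTexel(int layer, int x, int y, int argb) {
        BufferedImage edited = new BufferedImage(1024, 1024, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = edited.createGraphics();
        g.drawImage(layerImages[layer], 0, 0, null); // previous state
        g.dispose();
        edited.setRGB(x, y, argb);                   // the actual edit (alpha 0 = remove)

        layerImages[layer] = edited;                 // keep the record current
        TextureManager tm = TextureManager.getInstance();
        tm.replaceTexture("layer" + layer, new Texture(edited, true));
        tm.replaceTexture("layer" + layer + "_soft", new Texture(edited, true));
    }
}
```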
This all worked perfectly when I did it in Slick2D as well, so I know the procedure; it's just tripping up on things like replaceTexture for some reason. The saving is working: if I save my map and alt-tab to where the 20 .pngs are stored, I can see the pixels have been edited.
For some reason, though, when I load the images a second time from file, the engine seems to latch onto the old version rather than the newly loaded one.
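Reduced to a sketch, the F5/F9 pair looks like the code below. One guess at the stale load, though it's only speculation on my part: if the re-load goes through Toolkit.getImage or new ImageIcon rather than ImageIO.read, AWT caches those images by file path and will happily hand back the pre-edit pixels on the second load; ImageIO.read has no such cache.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import com.threed.jpct.Texture;
import com.threed.jpct.TextureManager;

public class WorldIO {
    // F5: write each edited record image straight back to its .png.
    public static void save(BufferedImage[] layerImages) throws Exception {
        for (int i = 0; i < layerImages.length; i++) {
            ImageIO.write(layerImages[i], "png", new File("world/layer" + i + ".png"));
        }
    }

    // F9: reload from disk. ImageIO.read always reads the file fresh,
    // so the edited pixels actually make it back into the textures.
    public static void load(BufferedImage[] layerImages) throws Exception {
        TextureManager tm = TextureManager.getInstance();
        for (int i = 0; i < layerImages.length; i++) {
            layerImages[i] = ImageIO.read(new File("world/layer" + i + ".png"));
            tm.replaceTexture("layer" + i, new Texture(layerImages[i], true));
            tm.replaceTexture("layer" + i + "_soft", new Texture(layerImages[i], true));
        }
    }
}
```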
If you can see a more concise and efficient way to edit texture data that I can save to and load from files, I welcome it. The method I'm using right now, while I understand what it's doing, is really slow and clearly having problems.
(Could ITextureEffect achieve all this?)
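From the docs, it looks like it might: an ITextureEffect gets attached to a Texture with setEffect() and rewrites the texture's pixel array whenever applyEffect() is called, which would skip the whole BufferedImage-to-new-Texture-to-replaceTexture round trip. A rough sketch of what I have in mind (whether `source` holds the current pixels or the original load is my assumption):

```java
import com.threed.jpct.ITextureEffect;
import com.threed.jpct.Texture;

// Sketch: an effect that applies one queued texel edit in place.
public class TexelEditEffect implements ITextureEffect {
    private final int width;   // texture width in texels (1024 for these layers)
    private int pendingIndex = -1;
    private int pendingArgb;

    public TexelEditEffect(int width) {
        this.width = width;
    }

    public void init(Texture tex) {
        // nothing to set up; width is passed in since all layers are 1024 wide
    }

    // Queue an edit, then call applyEffect() on the texture.
    public void edit(int x, int y, int argb) {
        pendingIndex = y * width + x;
        pendingArgb = argb;
    }

    // dest receives the texture's new pixels; source holds the current ones
    // (my reading of the docs; if source is actually the original load,
    // the copy would need to come from a kept record instead).
    public void apply(int[] dest, int[] source) {
        System.arraycopy(source, 0, dest, 0, source.length);
        if (pendingIndex >= 0) {
            dest[pendingIndex] = pendingArgb;
            pendingIndex = -1;
        }
    }

    public boolean containsAlpha() {
        return true; // voxel edits change alpha (drawing into / removing from slots)
    }
}
```

Usage would presumably be a single tex.setEffect(effect) at load, then effect.edit(x, y, argb) plus tex.applyEffect() per click, while still keeping the BufferedImage records in sync for the F5 save.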