Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - aZen

Pages: [1] 2
Projects / Voxel Engine
« on: March 07, 2014, 08:28:44 pm »
I'm planning to write my own software rendering engine for voxels, since unfortunately the speed of the JPCT software renderer (even with mesh optimization) doesn't cut it for me. Inspiration is drawn from Slab6, which uses a software renderer and is lightning fast.

I was wondering if you could give me some input to get started @ EgonOlsen. Thank you!

Edit: Also, do you think it's a good idea? Do you think I could get similar rendering speed in Java compared to Slab6?
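To make the idea concrete, this is the kind of inner loop I have in mind: a minimal 3D grid march in the Amanatides & Woo style, in plain Java. No jPCT calls, and all names are made up for the sketch; it's not a real renderer, just the per-ray core.

```java
public class VoxelTrace {
    // Step from voxel to voxel along a ray until a solid voxel is hit.
    // Assumes voxels are unit cubes and the ray origin lies inside the grid.
    static int[] march(boolean[][][] solid, double[] org, double[] dir) {
        int nx = solid.length, ny = solid[0].length, nz = solid[0][0].length;
        int[] v = { (int) org[0], (int) org[1], (int) org[2] };
        int[] step = new int[3];
        double[] tMax = new double[3], tDelta = new double[3];
        for (int i = 0; i < 3; i++) {
            step[i] = dir[i] >= 0 ? 1 : -1;
            double next = step[i] > 0 ? Math.floor(org[i]) + 1 : Math.floor(org[i]);
            tMax[i] = dir[i] != 0 ? (next - org[i]) / dir[i] : Double.POSITIVE_INFINITY;
            tDelta[i] = dir[i] != 0 ? step[i] / dir[i] : Double.POSITIVE_INFINITY;
        }
        while (v[0] >= 0 && v[0] < nx && v[1] >= 0 && v[1] < ny
                && v[2] >= 0 && v[2] < nz) {
            if (solid[v[0]][v[1]][v[2]]) return v.clone();
            // Advance along the axis whose next voxel boundary is closest.
            int a = tMax[0] <= tMax[1] && tMax[0] <= tMax[2] ? 0
                  : tMax[1] <= tMax[2] ? 1 : 2;
            v[a] += step[a];
            tMax[a] += tDelta[a];
        }
        return null; // ray left the grid without hitting anything
    }

    public static void main(String[] args) {
        boolean[][][] grid = new boolean[8][8][8];
        grid[5][2][2] = true; // one solid voxel
        int[] hit = march(grid, new double[]{0.5, 2.5, 2.5}, new double[]{1, 0, 0});
        System.out.println(java.util.Arrays.toString(hit)); // [5, 2, 2]
    }
}
```

A Slab6-style renderer would run something like this per screen column or per pixel; whether Java can do that fast enough is exactly my question.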

Support / FrameBuffer.getPixels()
« on: March 04, 2014, 11:46:55 pm »
Could you explain how the int array FrameBuffer.getPixels() is structured?

I'm having a hard time understanding how it works when oversampling. In normal mode I can simply access it in my software shader and everything works as expected. When using oversampling the pixels are "distorted" and I cannot predict which pixel I am accessing. Let me clarify that a bit.

(screenshot attached)

So with normal sampling there is a one-to-one map from the zBuffer to the getPixels() buffer. With oversampling I was expecting a 4 -> 1 map, but that is not the case. In the "Oversampling" example in the picture I'm using a 2 -> 1 map (i.e. pixels[c/2] = zBuffer[c]). I cannot figure out how to do it correctly and have already spent way too much time on it.

So my question is: How do the zBuffer and the getPixels() array relate when using oversampling?
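For reference, this is the mapping I was expecting: assuming the internal OGSS buffer is rendered at twice the width and height, each output pixel would average a 2x2 block of internal samples. Plain Java sketch, not jPCT API, and the 2x assumption is mine:

```java
public class OversampleMap {
    // Row-major index into a w-wide pixel array.
    static int index(int x, int y, int w) {
        return y * w + x;
    }

    // Assuming 2x2 ordered-grid oversampling: output pixel (x, y)
    // averages the four internal samples at (2x, 2y)..(2x+1, 2y+1).
    static int downsampleGray(int[] big, int bigW, int x, int y) {
        int s = big[index(2 * x, 2 * y, bigW)]
              + big[index(2 * x + 1, 2 * y, bigW)]
              + big[index(2 * x, 2 * y + 1, bigW)]
              + big[index(2 * x + 1, 2 * y + 1, bigW)];
        return s / 4;
    }

    public static void main(String[] args) {
        int w = 2, h = 2;                 // final image size
        int[] big = new int[(2 * w) * (2 * h)];
        java.util.Arrays.fill(big, 100);  // uniform "rendered" samples
        big[index(0, 0, 2 * w)] = 104;    // tweak one sample of pixel (0, 0)
        System.out.println(downsampleGray(big, 2 * w, 0, 0)); // 101
        System.out.println(downsampleGray(big, 2 * w, 1, 1)); // 100
    }
}
```

If the actual layout differs from this, that would explain the distortion I'm seeing.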

Edit: This is what I get when using a 4->1 mapping.

(screenshot attached)

Support / Drawing with true alpha
« on: March 04, 2014, 12:48:02 am »
I've tried:

Code: [Select]
        Config.useFramebufferWithAlpha = true;
        FrameBuffer fb = new FrameBuffer(getWidth(), getHeight(), FrameBuffer.SAMPLINGMODE_OGSS);
        fb.clear(new Color(255, 255, 255, 0));
        BufferedImage result = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_ARGB);
        // (the world is rendered into fb and its pixels copied into result here)
        Config.useFramebufferWithAlpha = false;
        return result;

But the alpha value is only "1" or "0", so the edges don't appear smooth. Is there an easy way to fix this?
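For what it's worth, this is the behavior I'd hope for: if the alpha channel were averaged during downsampling just like the color channels, a half-covered edge pixel would come out around 128 instead of 0 or 255. Plain Java sketch of that averaging, not jPCT code:

```java
public class AlphaDownsample {
    // Average all four ARGB channels of a 2x2 sample block, so edge
    // pixels that are half-covered get alpha ~127 instead of 0 or 255.
    static int avgArgb(int p0, int p1, int p2, int p3) {
        int a = ((p0 >>> 24) + (p1 >>> 24) + (p2 >>> 24) + (p3 >>> 24)) / 4;
        int r = (((p0 >> 16) & 255) + ((p1 >> 16) & 255)
               + ((p2 >> 16) & 255) + ((p3 >> 16) & 255)) / 4;
        int g = (((p0 >> 8) & 255) + ((p1 >> 8) & 255)
               + ((p2 >> 8) & 255) + ((p3 >> 8) & 255)) / 4;
        int b = ((p0 & 255) + (p1 & 255) + (p2 & 255) + (p3 & 255)) / 4;
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int opaque = 0xFF102030;   // covered sample
        int clear  = 0x00000000;   // empty sample
        // Two covered + two empty samples -> half-transparent edge pixel.
        int edge = avgArgb(opaque, opaque, clear, clear);
        System.out.println(edge >>> 24); // 127
    }
}
```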

Support / Request: Array Textures
« on: February 26, 2014, 10:57:22 am »
Would it be possible to add support for array textures to the software/hardware renderer? To me it seems that this could be a relatively easy thing to do (at least for the software renderer). However, this depends on how things are currently implemented.

I know that the desktop renderer is not your most-used engine, but my voxel editor would greatly benefit from such a thing. Thank you for considering!
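In case it helps, the addressing itself seems simple; here's how I'd imagine emulating an array texture on the application side, with all (equally sized) layers stacked into one buffer. Plain Java sketch with made-up names, not a proposal for the actual jPCT API:

```java
public class TextureArrayIndex {
    // All layers stacked into one int[] of size w * h * layers;
    // a "layer" is selected by a constant offset, so the per-texel
    // lookup stays a single array access.
    static int texel(int[] stack, int w, int h, int layer, int x, int y) {
        return stack[layer * w * h + y * w + x];
    }

    public static void main(String[] args) {
        int w = 4, h = 4, layers = 3;
        int[] stack = new int[w * h * layers];
        stack[2 * w * h + 1 * w + 3] = 0xABCDEF; // layer 2, texel (3, 1)
        System.out.println(texel(stack, w, h, 2, 3, 1) == 0xABCDEF); // true
    }
}
```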

Support / Texture Limit
« on: February 19, 2014, 05:56:25 pm »
Is there a maximum number of textures that one can have? I'm investigating some issues and want to get the easy-to-check things out of the way first =)

Support / Rapid Texture loading/unloading
« on: February 10, 2014, 12:14:51 pm »
So, I have a problem when rapidly loading and unloading textures in a multithreaded environment with several viewports, worlds, etc.

In particular, the problem occurs when unloading a texture, then immediately loading it again (same image and name) and creating an object that uses the texture (all without a redraw). The object then appears untextured and is colored in the setAdditionalColor(...) color. I suspect that the texture doesn't get unloaded from the FrameBuffer. However, when I add another object with the same texture "a little bit later" (after a redraw), it is textured fine.

I tried to create a simple test case, but no luck. The workaround I have now is the following:

Code: [Select]
        if (textureManager.containsTexture(name)) {
            textureManager.replaceTexture(name, texture);
        } else {
            textureManager.addTexture(name, texture);
            textureManager.replaceTexture(name, texture);
        }

Will this cause a lot of extra load? How bad a practice is this?

Thank you!

Support / Texture Memory Usage
« on: February 09, 2014, 07:02:27 am »
Is there an easy way to determine the texture memory usage in the software renderer? I'm trying to optimize textures, and that would be quite helpful.
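For now I'm estimating it myself, assuming 4 bytes per texel plus a full mip chain when mipmapping is on (these are my assumptions, not something I've verified against jPCT internals):

```java
public class TextureMemory {
    // Rough estimate: 4 bytes per texel, plus the mip chain if one is
    // generated (the chain adds roughly a third: 1 + 1/4 + 1/16 + ...).
    static long estimateBytes(int w, int h, boolean mipmapped) {
        long base = 4L * w * h;
        if (!mipmapped) return base;
        long total = 0;
        while (w >= 1 && h >= 1) {
            total += 4L * w * h;
            if (w == 1 && h == 1) break;
            w = Math.max(1, w / 2);
            h = Math.max(1, h / 2);
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(estimateBytes(256, 256, false)); // 262144
        System.out.println(estimateBytes(256, 256, true));  // ~1/3 more
    }
}
```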

Support / Rendering Quality
« on: January 27, 2014, 10:23:46 pm »
Is there a way to create a frame buffer that allows for rendering with different sampling modes (quality levels)? Or do I have to create two instances (which seems redundant, memory-wise)?

I would like to render with low quality when frame rate is important and with high quality when quality is. Would it be possible to implement that?
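The pattern I have in mind is roughly this: lazily allocate one buffer per quality level and reuse them, so switching quality only costs an allocation the first time. Plain Java sketch with made-up names, not jPCT API:

```java
import java.util.HashMap;
import java.util.Map;

public class BufferCache {
    // Lazily allocate one pixel buffer per quality level and reuse it.
    private final Map<Integer, int[]> buffers = new HashMap<>();
    private final int w, h;

    BufferCache(int w, int h) { this.w = w; this.h = h; }

    // scale = 1 for normal quality, 2 for 2x oversampling, etc.
    int[] get(int scale) {
        return buffers.computeIfAbsent(scale, s -> new int[w * s * h * s]);
    }

    public static void main(String[] args) {
        BufferCache cache = new BufferCache(320, 240);
        int[] fast = cache.get(1);      // low quality while interacting
        int[] nice = cache.get(2);      // high quality when idle
        System.out.println(fast.length);          // 76800
        System.out.println(nice.length);          // 307200
        System.out.println(cache.get(1) == fast); // true: reused, not reallocated
    }
}
```

The memory cost of keeping both around is what I'd like to avoid, hence the question.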

Support / Triangle Count
« on: December 11, 2013, 11:38:47 pm »
Is there an easy way to access how many triangles were rendered in the previous frame (software renderer)?

I'd like to use it for my dynamic optimization algorithm (it's actually only an idea right now).


Support / Polyline
« on: December 09, 2013, 10:32:32 pm »
I'm trying to add a Polyline to the world. However, it is never rendered. Does it not work with the software renderer? Or am I missing something obvious?

Code: [Select]
Polyline polyline = new Polyline(triangle, Color.BLACK);
world.addPolyline(polyline);

This is the code I'm using.

Thank you!

Support / Texture Interpolation Request
« on: December 09, 2013, 02:58:58 pm »
So when you use a texture and the uv position is right on the edge (when at least one coordinate is zero or one), the outside of the texture is interpolated with black. That results in black pixels "on the edge", which looks awful. I was wondering if you could add an option so that the border color of the texture is used for interpolation instead of black. E.g. the arbitrary point (-5,-7) would be interpolated with (0,0) of the texture, and so on.

I'm not sure if this would be easy for you to implement, but it seems like something that should be done "on your side". Alternatively, I could "frame" the textures before adding them.
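To illustrate, this is the clamp-to-edge addressing I mean (plain Java sketch, not jPCT code):

```java
public class ClampAddress {
    // Clamp-to-edge texel addressing: coordinates outside the texture
    // reuse the nearest border texel instead of sampling black.
    static int clamp(int v, int max) {
        return v < 0 ? 0 : (v > max ? max : v);
    }

    static int texel(int[] tex, int w, int h, int x, int y) {
        return tex[clamp(y, h - 1) * w + clamp(x, w - 1)];
    }

    public static void main(String[] args) {
        int w = 4, h = 4;
        int[] tex = new int[w * h];
        tex[0] = 0xFF0000; // border texel at (0, 0)
        // (-5, -7) clamps to (0, 0), as described above.
        System.out.println(texel(tex, w, h, -5, -7) == 0xFF0000); // true
    }
}
```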

Please let me know!

Support / Texture Access
« on: December 09, 2013, 12:25:05 pm »
I'm dealing with a lot of textures that are changing rapidly. I'm wondering if it would be possible (or even make sense) to give access to the underlying pixel data of the texture.

Would it make sense to change the pixel data directly, as opposed to creating a new texture (I'm using the software renderer)? Would the change be visible if the texture is already loaded, or would I need to reload it?

The goal is to have less memory allocation. I'm assuming that the underlying pixel data of the image used to instantiate the texture is duplicated into the texture. Is that correct, or is a reference used?
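Just to show what I mean by reference vs. copy semantics (plain Java, nothing jPCT-specific):

```java
import java.util.Arrays;

public class PixelSharing {
    public static void main(String[] args) {
        int[] source = new int[64];

        // If the texture kept a reference, in-place writes would be
        // visible through it without reloading anything...
        int[] reference = source;
        source[0] = 42;
        System.out.println(reference[0]); // 42

        // ...but if it copied the data on construction (the usual,
        // safer design), later writes to the source are NOT visible.
        int[] copy = Arrays.copyOf(source, source.length);
        source[0] = 7;
        System.out.println(copy[0]); // still 42
    }
}
```

In the copy case I'd need either a reload per change or direct access to the texture's own array, which is what I'm asking for.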

Thank you!

Feedback / Tapatalk
« on: December 05, 2013, 01:02:23 pm »
Would you mind registering the forum with Tapatalk? It would make things much easier for me (and probably others)!

Thank you!

Projects / VoxelShop - voxel editor
« on: November 11, 2013, 06:01:18 pm »
This project relies on JPCT on a large scale and I'm very grateful for it!

>> VoxelShop - a voxel editor with animation support

This currently uses the software renderer and I'm still planning to change that.

Please consider using the Blackflux forum if you want to give feedback. I will probably not be able to check this thread regularly in the future.


Support / Performance Questions
« on: May 18, 2013, 07:00:38 pm »
Two questions:

A) Would it be possible to render only certain parts of the whole FrameBuffer in software mode? I'm updating only certain, well-defined areas and would like to avoid re-rendering everything every time I do an update. I've already experimented with moving the camera, but that is extremely tricky to get right.
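To illustrate, the compositing part is just a rectangular copy; it's the rendering side where I'd need engine support. Plain Java sketch with made-up names:

```java
public class DirtyRect {
    // Update only a dirty rectangle: copy just those rows/columns from
    // the freshly rendered buffer into the displayed one, leaving the
    // rest of the displayed image untouched.
    static void blit(int[] src, int[] dst, int w,
                     int rx, int ry, int rw, int rh) {
        for (int y = ry; y < ry + rh; y++) {
            System.arraycopy(src, y * w + rx, dst, y * w + rx, rw);
        }
    }

    public static void main(String[] args) {
        int w = 8, h = 8;
        int[] rendered = new int[w * h];
        int[] shown = new int[w * h];
        java.util.Arrays.fill(rendered, 5);
        blit(rendered, shown, w, 2, 2, 3, 3); // update a 3x3 area only
        System.out.println(shown[2 * w + 2]); // 5 (inside the rectangle)
        System.out.println(shown[0]);         // 0 (untouched outside)
    }
}
```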

B) Hit testing is a performance problem for me at a certain point (when I have a lot of objects). I'm assuming you're using some kind of index for all the triangles and then doing a hit-test with the ray? I'm using calcMinDistanceAndObject3D(com.threed.jpct.SimpleVector, com.threed.jpct.SimpleVector, float). Would it be possible/helpful to support this function with a set of possible objects or an area (using my own index)?
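For context, the prefilter I'd run with my own index is just a cheap ray-vs-bounding-box test (the slab method) before the expensive per-triangle call. Plain Java sketch, not jPCT code:

```java
public class AabbPrefilter {
    // Slab-method ray vs. axis-aligned box test: a cheap prefilter so
    // the expensive per-triangle hit test only runs on objects whose
    // bounds the ray actually crosses.
    static boolean hits(float[] lo, float[] hi,
                        float[] org, float[] dir) {
        float tMin = Float.NEGATIVE_INFINITY, tMax = Float.POSITIVE_INFINITY;
        for (int i = 0; i < 3; i++) {
            float inv = 1f / dir[i];
            float t0 = (lo[i] - org[i]) * inv;
            float t1 = (hi[i] - org[i]) * inv;
            if (inv < 0) { float tmp = t0; t0 = t1; t1 = tmp; }
            tMin = Math.max(tMin, t0);
            tMax = Math.min(tMax, t1);
        }
        return tMax >= tMin && tMax >= 0; // overlap, and in front of the ray
    }

    public static void main(String[] args) {
        float[] lo = {0, 0, 0}, hi = {1, 1, 1};
        float[] org = {-1, 0.5f, 0.5f};
        System.out.println(hits(lo, hi, org, new float[]{1, 0, 0}));  // true
        System.out.println(hits(lo, hi, org, new float[]{-1, 0, 0})); // false
    }
}
```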

Thank you!
