Displaced objects after a call to removeRenderTarget()

Started by XOrDo, February 11, 2013, 03:06:03 PM


XOrDo

Hi,

I have used render to texture in jpct-ae in the following way:
fb.setRenderTarget(TextureManager.getInstance().getTextureID("texMapeo"));
fb.clear(RGBColor.WHITE);
world.renderScene(fb);
world.draw(fb);
fb.display();
fb.removeRenderTarget();


I render a plane to the texture, and then I texture a different plane with this rendered texture.

It works fine, but after calling fb.removeRenderTarget() the plane painted to the framebuffer is slightly displaced from its position. If I comment out the first render step, the plane appears in the correct position.

Is there something that I'm missing?

EgonOlsen

I'm not sure if I got this right...


  • You render plane into a texture
  • You use that texture to texture another plane without a render target being set
  • If you do this, the second plane isn't located where it's supposed to be
  • If you omit the render to texture step, it's located correctly

Is that what you mean? If so, have you checked if plane.getTransformedCenter() returns the same value in both cases?
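Something like this would show it (just a sketch, not tested; "plane" stands for whatever your Object3D is called, and Logger is jPCT's own logger class):

SimpleVector before = plane.getTransformedCenter();
Logger.log("center before render to texture: " + before);

fb.setRenderTarget(TextureManager.getInstance().getTextureID("texMapeo"));
fb.clear(RGBColor.WHITE);
world.renderScene(fb);
world.draw(fb);
fb.removeRenderTarget();

SimpleVector after = plane.getTransformedCenter();
Logger.log("center after render to texture: " + after);

If both values match, the object itself doesn't move and the difference has to come from the projection/frame buffer setup.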

XOrDo

Okay, what I'm trying to do is this: render a "clipping plane" to a texture the same size as the framebuffer, then use that texture in a shader to draw only the pixels that lie inside that plane.

I've tested the shader and it works OK, except the plane is displaced.

To prove this, I disabled the shader completely and left in only the code creating the texture:

texturaMapping = new NPOTTexture(1196,897, RGBColor.WHITE);
TextureManager.getInstance().addTexture("texMapeoBig", texturaMapping);


the plane itself:

plano = createPlane(PLANE_WIDTH, PLANE_HEIGHT);
//plano = Primitives.getPlane(1,160);
plano.rotateX((float) Math.PI);
plano.setAdditionalColor(RGBColor.BLACK);
plano.strip();
plano.build();
plano.translate(new SimpleVector(0.0f, 0.0f, 0.0f));


function createPlane:
private static Object3D createPlane(float planeWidth, float planeHeight) {
    Object3D plane = new Object3D(2);
    float repeat = 4.0f;
    plane.addTriangle(new SimpleVector(-planeWidth, planeHeight, 0), 0f, 0f,
                      new SimpleVector(planeWidth, planeHeight, 0), repeat, 0f,
                      new SimpleVector(-planeWidth, -planeHeight, 0), 0f, repeat);
    plane.addTriangle(new SimpleVector(planeWidth, planeHeight, 0), repeat, 0f,
                      new SimpleVector(planeWidth, -planeHeight, 0), repeat, repeat,
                      new SimpleVector(-planeWidth, -planeHeight, 0), 0f, repeat);
    return plane;
}


...and the render-to-texture part, which I posted before. For the record, here is the whole render code:

fb.setRenderTarget(TextureManager.getInstance().getTextureID("texMapeoBig"));
fb.clear(RGBColor.WHITE);
world.renderScene(fb);
world.draw(fb);

fb.removeRenderTarget();

world.renderScene(fb);
world.draw(fb);
fb.display();


So, if I simply comment out the render-to-texture part, the plane gets rendered in its proper position on the screen. If I comment it back in, it is displaced (and so it is in the texture, too).

I think this might be related to the framebuffer and texture sizes. I am currently enforcing an atypical fb and texture size (1196x897), because with Vuforia I need to set the fb resolution to that of the actual video stream to get the proportions right. Now, my device's real screen is 1280x720, so I'm guessing there's some resampling involved in one of the two processes that is getting them out of sync.

I hope I explained myself.

EgonOlsen

If you set the frame buffer to a size different from the real, physical size, jPCT-AE will calculate scale/fov based on these values, but the result will be displayed on the real screen and clipped at the top (because the origin is in the lower left corner). In your case, you'll scale/distort your graphics in the y-direction and cut off the upper part. That can't be what you actually want, and it's most likely the reason for the problems you are experiencing. I suggest using the real values as frame buffer dimensions and adjusting the rest accordingly. I don't fully understand the reason for the resolution you have chosen, or what this "video stream" is that you are talking about, but creating a frame buffer with anything but the real resolution doesn't make much sense.
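Just to make clear what I mean (a sketch only, assuming the usual GLSurfaceView.Renderer callback and the GLES 2.0 constructor):

public void onSurfaceChanged(GL10 gl, int w, int h) {
    if (fb != null) {
        fb.dispose();
    }
    // create the buffer with the real, physical surface size, not the camera image size
    fb = new FrameBuffer(w, h);
}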

kelmer

What he said happens in my case too (we're kind of working together). If you read the width and height of the camera video stream from Vuforia, you get those odd resolution values (the camera and the video screen don't always share the same dimensions, apparently). If you just create the FB with the values provided by onSurfaceChanged, then the objects get stretched or shrunk depending on how you hold your device (landscape or portrait): you can see the objects stretching or shrinking as you turn the device.

If you fix those values to those provided by the camera, they show up as they should.

EgonOlsen

But these resolutions are the ones the camera uses for its images (which is why it's 4:3). If you ever have a device that can create movies @ 4000*3000 (just an example...) but the screen resolution is 1280*800, you'll only see a tiny fraction of your actual scene because everything else will be clipped away...that can't be the right solution IMHO. Can't you just read the fov values from Vuforia and apply them to the camera? Shouldn't that fix the problem (apart from the fact that a 4:3 image doesn't fit on a 16:9 screen anyway...how is that solved...are there borders around the camera's image?)?
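Applying the fov would be something like this (again only a sketch; I'm assuming that Vuforia hands out the horizontal/vertical fov as angles in radians, hence the conversion):

Camera cam = world.getCamera();
cam.setFOV(cam.convertRADAngleIntoFOV(fovRad));   // horizontal fov angle from Vuforia
cam.setYFOV(cam.convertRADAngleIntoFOV(fovyRad)); // vertical fov angle from Vuforia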

kelmer

Honestly, I don't quite get what's happening here "behind the scenes". We already get the fov/fovy values and set them up, but that does not do the trick.

If I create the framebuffer with the width and height from onSurfaceChanged, I get this stretching and shrinking of my objects, even after setting fov and fovy.

If I set the fb resolution values manually, then objects are displaced along the x and y axes when I move the device, unless I also set the fov/fovy values, which is the only way they show up properly. I thought that settled it, temporarily at least, until we stumbled across this problem.

EgonOlsen

This only "fixes" the problem as long as camera and screen resolution are close together IMHO. If you would be using a 800*480 device, it'll blow.

I'm a bit in the dark here too, because I've no clue about Vuforia, but since camera image and screen size don't match, I expect the result to look something like this:

[attached image: a 4:3 camera image centered on the screen, with the unused "screen" area to its left and right]
I.e. the camera image doesn't cover the whole screen, but only a 4:3 part of it in the center. Other possibilities are that the camera image is distorted or zoomed, so that the upper and lower parts get clipped away. Which one is it?

If it's like in this picture, I think the solution should be to make the frame buffer's output cover the camera's image area and leave the area marked with "screen" alone. To do this, you would have to calculate the width of that area (with the height being the height of the device's screen) based on the camera's width and height. Use these values to create the new frame buffer (or use resize() on the old one) and calculate the relative offset of the left border of that area from the screen's border. Set this value in Config.viewportOffsetX. Set the fov values from the camera. Profit...or maybe not. I don't know, but it's worth a try.
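In code, something like this (a sketch with made-up names: camW/camH are the camera image's size, screenW/screenH the physical screen's size; you might have to experiment with the exact sign/normalization of the offset):

// width of the 4:3 camera area when its height matches the screen's height
int targetHeight = screenH;
int targetWidth = (int) (screenH * ((float) camW / (float) camH));

// relative offset of that area's left border from the screen's left border
Config.viewportOffsetX = ((screenW - targetWidth) / 2f) / (float) screenW;

// frame buffer that covers just the camera's image area
fb = new FrameBuffer(targetWidth, targetHeight);

// ...plus the fov values from the camera, as mentioned above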

kelmer

The camera image covers the whole screen. I checked, and from what I can tell the image looks the same as in the regular phone camera, with no apparent distortion. I am guessing that the image is zoomed and some of the camera image is lost (i.e. clipped away) in the process, as you say, so the solution you suggest won't work, will it?


EgonOlsen

It should work too, just with everything flipped, i.e. width becomes height and the relative offset has to be negative (Edit: and in y-direction of course).

kelmer

Thanks, we'll give it a go. I'll try to work out a proper solution, though, and document it on the wiki.

kelmer

There isn't a resize() method in the FrameBuffer object. Maybe it isn't supported in the OpenGL ES version either?

kelmer

Well I finally got the time to struggle a bit with the problem and I think I finally understand the basics of it, with the help of your posts.

Vuforia's native code requests the camera image of my Galaxy Nexus at a resolution of 640x480. Then, to fill the actual display size of my device, it resamples this image taking the width of the screen (1196) as the baseline, which results in a 4:3 resampled video stream of 1196x897 (as opposed to the 1196x720, 16:10 ratio of the actual display).
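Just to spell out the arithmetic (made-up variable names):

int camW = 640, camH = 480;          // camera image requested by Vuforia's native code
int screenW = 1196;                  // screen width used as the baseline
int streamW = screenW;               // 1196
int streamH = screenW * camH / camW; // 1196 * 480 / 640 = 897 -> 4:3 again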

What I don't understand is the renderToTexture process in JPCT-AE.

In order for both things to match, I create a framebuffer of 1196x897 size (hardcoding the values for this device, just for the sake of simplicity). I then create an NPOT texture of exactly the same size of 1196x897.


Then, in my onDrawFrame, I just render first to the texture, then remove the render target and render to the screen. No further processing is done, just rendering first to the texture, then to the screen (I don't have any shaders, just a plane that gets rendered to both the texture and the display). The texture is not used anywhere else in the code.

Doing this, objects get displaced. Yet if I don't render first to texture, my objects are placed correctly over the target.

Shouldn't the framebuffer (which is "shared" between the texture I render to and the display) be left untouched? I know the plane is untouched, I checked its position before and after rendering to texture and it's exactly the same. How come rendering first to a texture modifies the behavior of later rendering to screen?

What's more, if I add some other objects to the world, those get displaced too, even if they aren't rendered to the texture.

This is a capture without the rendering to texture:

[screenshot]

And this is a screencap after rendering to texture:

[screenshot]

Anyway, following your chain of thought and your advice, I tried accounting for this vertical displacement by resizing the fb again after rendering to the texture and then applying a vertical offset of 177 (897 - 720), but then I don't see anything on my marker anymore :(