Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - rhadamanthus

Pages: [1]
1
Support / Re: Using an object to render to the stencil buffer
« on: May 06, 2013, 09:13:42 pm »
Yup! That's exactly what I ended up doing. Thank you for the reply.
I put the box in a separate World instance in order to render it independently. Is that how it's supposed to be used?
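
For anyone finding this later, the draw loop now looks roughly like this (a simplified, untested sketch; the field names are mine):
Code: [Select]
import javax.microedition.khronos.opengles.GL10;
import com.threed.jpct.FrameBuffer;
import com.threed.jpct.World;

// Sketch of a two-World draw loop: one World holds only the occluder box,
// the other holds the remaining virtual objects. Both render into the same FrameBuffer.
public class TwoWorldRenderer {
    private FrameBuffer fb;     // shared by both worlds
    private World boxWorld;     // contains only the occluder box
    private World sceneWorld;   // contains the other virtual objects

    public void onDrawFrame(GL10 gl) {
        fb.clear();                 // clear the buffers once per frame

        boxWorld.renderScene(fb);   // draw the box first...
        boxWorld.draw(fb);

        sceneWorld.renderScene(fb); // ...then the rest of the scene
        sceneWorld.draw(fb);

        fb.display();
    }
}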

2
Support / Using an object to render to the stencil buffer
« on: May 05, 2013, 02:33:01 am »
Hello,
I am developing an augmented reality application. I have successfully aligned a 3D box with an actual box that I see through the camera. The next thing I want to do is to use the 3D box to hide other 3D objects around it, so that it appears as if those objects were behind the real box in the image (they should be occluded by the real box whenever they are behind it).

I think this can be done using the stencil buffer; however, I am not experienced enough to work out what I need to do, either in OpenGL or in JPCT. Does anybody have any hints about that?
I saw that IRenderHook can be used to inject code into the rendering pipeline, but I'm not exactly sure how this works or how to set different stencil modes. Is it that every time repeatRendering() returns true, the method afterRendering() is called?
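
To make the question more concrete, this is the kind of state I imagine setting around the two draw passes (raw GLES20 calls, untested; I still don't know where IRenderHook actually lets me put them, and this ignores the in-front/behind part that the depth test would have to handle):
Code: [Select]
import android.opengl.GLES20;

// Sketch: draw the "real box" proxy first, writing 1s into the stencil buffer
// (without touching the color buffer), then draw the virtual objects only
// where the stencil is still 0.
public class StencilStates {

    // Before drawing the proxy box: always pass, write reference value 1.
    public static void beginOccluderPass() {
        GLES20.glEnable(GLES20.GL_STENCIL_TEST);
        GLES20.glStencilFunc(GLES20.GL_ALWAYS, 1, 0xFF);
        GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_REPLACE);
        GLES20.glColorMask(false, false, false, false);  // keep the camera image visible
    }

    // Before drawing the virtual objects: only draw where no 1 was written.
    public static void beginOccludedPass() {
        GLES20.glColorMask(true, true, true, true);
        GLES20.glStencilFunc(GLES20.GL_NOTEQUAL, 1, 0xFF);
        GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_KEEP);
    }
}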

Any help would be appreciated. Thank you!

3
Support / Re: Augmented reality using JPCT-AE with OpenCV
« on: May 03, 2013, 12:10:49 pm »
Thank you very much for the help, that was exactly what I needed.
I apologize for not reading the documentation thoroughly enough. I'm way behind on the project and didn't know where to begin.

4
Support / Re: Augmented reality using JPCT-AE with OpenCV
« on: May 03, 2013, 11:48:11 am »
It's the matrix of intrinsic parameters of the camera; fx and fy are the focal lengths.
It seems to me that it's what OpenGL calls the projection matrix. However, according to the computer vision lectures and books, it's supposed to be multiplied by a 3x4 transform matrix (the camera transform) to get the projection matrix. I guess it's a terminology difference.
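
Just to spell out what I mean with the usual pinhole notation (nothing jPCT-specific):
Code: [Select]
P (3x4) = K * [R | t]

          [ fx  0  ox ]   [ r11 r12 r13 | t1 ]
        = [  0 fy  oy ] * [ r21 r22 r23 | t2 ]
          [  0  0   1 ]   [ r31 r32 r33 | t3 ]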

I have a few other questions about JPCT, if that's OK:
  • Is the default cube primitive axis-aligned?
  • How are the axes oriented? Is y up, x right, and z towards the screen?

5
Support / Augmented reality using JPCT-AE with OpenCV
« on: May 03, 2013, 05:25:11 am »
Hello,
I am using OpenCV to calibrate the camera, which gives me a 3x3 matrix of intrinsic parameters:
Code: [Select]
fx 0  ox
0  fy oy
0  0  1

with the actual values as follows:
Code: [Select]
966.64154, 0.0      , 477.89288
0.0      , 966.64154, 363.23544
0.0      , 0.0      , 1.0

I also use OpenCV to compute the locations of the cubes with respect to the camera, meaning that I don't need to move the camera, only the cubes (one or two for now).

I see that I can convert an OpenCV matrix directly into a row-major float array and use that in JPCT to move the cubes.
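
To be concrete, this is the kind of thing I had in mind for the pose part (just a sketch, untested; applyPose is my own name, any OpenCV-vs-JPCT axis conversion is ignored, and I may still need to transpose the rotation because of jPCT's row-major convention):
Code: [Select]
import com.threed.jpct.Matrix;
import com.threed.jpct.Object3D;
import com.threed.jpct.SimpleVector;

// Sketch: apply a pose from OpenCV (3x3 rotation from Rodrigues + translation vector)
// to a jPCT Object3D.
public class PoseHelper {
    public static void applyPose(Object3D cube, float[] rot3x3, float[] trans3) {
        float[] dump = {
            rot3x3[0], rot3x3[1], rot3x3[2], 0f,
            rot3x3[3], rot3x3[4], rot3x3[5], 0f,
            rot3x3[6], rot3x3[7], rot3x3[8], 0f,
            0f,        0f,        0f,        1f
        };
        Matrix m = new Matrix();
        m.setDump(dump);                   // 16 floats, row-major
        cube.setRotationMatrix(m);
        cube.clearTranslation();
        cube.translate(new SimpleVector(trans3[0], trans3[1], trans3[2]));
    }
}
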
What I'm less sure about is how to use the camera parameters matrix. It doesn't seem possible to set the projection matrix directly on the Camera class, and I'm not exactly sure how to convert those parameters into something that the Camera class understands (an FOV, for example).
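
The closest I've come to a guess (also untested) is to turn fx and the image width into a view angle, assuming setFOV() expects the frustum width at a distance of 1:
Code: [Select]
import com.threed.jpct.Camera;

// Guess: horizontal view angle = 2*atan(w / (2*fx)). If Camera.setFOV() really takes
// the frustum width at distance 1 (i.e. 2*tan(halfAngle)), that value is just w / fx.
public class FovHelper {
    public static void applyIntrinsics(Camera cam, float fx, float imageWidthPx) {
        double halfAngle = Math.atan(imageWidthPx / (2.0 * fx));
        float jpctFov = (float) (2.0 * Math.tan(halfAngle));   // == imageWidthPx / fx
        cam.setFOV(jpctFov);
    }
}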

Any hints?
