Author Topic: Augmented reality using JPCT-AE with OpenCV  (Read 2981 times)

Offline rhadamanthus

Augmented reality using JPCT-AE with OpenCV
« on: May 03, 2013, 05:25:11 am »
Hello,
I am using OpenCV to calibrate the camera, which gives me a 3x3 matrix of intrinsic parameters
Code: [Select]
fx 0  ox
0  fy oy
0  0  1

with the actual values as follows:
Code: [Select]
966.64154, 0.0      , 477.89288
0.0      , 966.64154, 363.23544
0.0      , 0.0      , 1.0

I also use OpenCV to compute the location of cubes with respect to the camera, meaning that I don't need to move the camera, only the cubes (1 or 2 for now).

I see that I can convert an OpenCV matrix directly into a row-major float array and use that in jPCT to move the cubes.
However, I'm not sure how to use the camera parameters matrix. It seems that I can't set the projection matrix directly in the Camera class, and I'm not sure how to convert those parameters into something the Camera class understands (like the FOV, for example).
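For the pose part, a minimal sketch of that row-major conversion (assuming jPCT's Matrix.setDump(), and that the OpenCV pose has already been converted from rvec to a 3x3 rotation via Rodrigues) could look like this. Note that jPCT multiplies row vectors (v * M) while OpenCV uses column vectors (M * v), so the rotation is transposed and the translation goes into the fourth row; the axis conventions still need to be checked against your setup:

```java
// Sketch: packing an OpenCV pose (3x3 rotation R, translation t) into the
// row-major float[16] that jPCT's Matrix.setDump() expects.
public class PoseToJpct {
    static float[] toRowMajorDump(double[][] R, double[] t) {
        float[] dump = new float[16];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                // Transpose: OpenCV column-vector convention -> jPCT row-vector convention
                dump[r * 4 + c] = (float) R[c][r];
        for (int c = 0; c < 3; c++)
            dump[12 + c] = (float) t[c]; // translation in the fourth row
        dump[15] = 1f;
        return dump;
    }

    public static void main(String[] args) {
        // Identity rotation, cube 5 units in front of the camera, for illustration
        double[][] R = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[] t = {0, 0, 5};
        float[] dump = toRowMajorDump(R, t);
        System.out.println(java.util.Arrays.toString(dump));
        // Then e.g.: new com.threed.jpct.Matrix().setDump(dump);
    }
}
```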

Any hints?
« Last Edit: May 03, 2013, 05:26:53 am by rhadamanthus »

Offline EgonOlsen

Re: Augmented reality using JPCT-AE with OpenCV
« Reply #1 on: May 03, 2013, 09:57:20 am »
What kind of matrix is that? What do f and o stand for?

Offline rhadamanthus

Re: Augmented reality using JPCT-AE with OpenCV
« Reply #2 on: May 03, 2013, 11:48:11 am »
It's the matrix of intrinsic parameters of the camera: f is the focal length (in pixels) and o is the principal point (optical center).
It seems to me that this is what OpenGL calls the projection matrix. However, in the computer vision lectures and books it's supposed to be multiplied by a 3x4 transform matrix (the camera extrinsics) to get the full projection matrix. I guess it's a terminology difference.
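A quick sketch of that relation, P = K [R | t], using the K values from the first post with an identity pose for illustration:

```java
// Sketch: the 3x4 computer-vision projection matrix P is the 3x3 intrinsics K
// multiplied by the 3x4 extrinsics [R | t].
public class ProjectionCompose {
    static double[][] compose(double[][] K, double[][] Rt) {
        double[][] P = new double[3][4];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 3; k++)
                    P[i][j] += K[i][k] * Rt[k][j];
        return P;
    }

    public static void main(String[] args) {
        double[][] K = {
            {966.64154, 0.0,       477.89288},
            {0.0,       966.64154, 363.23544},
            {0.0,       0.0,       1.0}};
        double[][] Rt = { // identity rotation, zero translation
            {1, 0, 0, 0},
            {0, 1, 0, 0},
            {0, 0, 1, 0}};
        double[][] P = compose(K, Rt);
        System.out.println(java.util.Arrays.deepToString(P));
    }
}
```

With the identity pose, P is just K with a zero fourth column appended.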

I have a few other questions about JPCT, if that's ok.
  • is the default cube primitive axis-aligned?
  • how are the axes oriented? Is y up, x right, and z towards the screen?
« Last Edit: May 03, 2013, 11:58:40 am by rhadamanthus »

Offline AugTech

Re: Augmented reality using JPCT-AE with OpenCV
« Reply #3 on: May 03, 2013, 12:04:00 pm »
You could always use the Android camera parameters to get FOV values and then apply these to the WorldCamera...

In your camera class
Code: [Select]
        Camera.Parameters cameraParams = mCamera.getParameters();
        float hva = cameraParams.getHorizontalViewAngle();
        float vva = cameraParams.getVerticalViewAngle();

and then in the render class
Code: [Select]
Camera worldCamera = theWorld.getCamera();
// Convert the device view angles (in degrees) into jPCT's internal FOV values
float wid = worldCamera.convertDEGAngleIntoFOV( hva );
float hig = worldCamera.convertDEGAngleIntoFOV( vva );
worldCamera.setFOV( wid );
worldCamera.setYFOV( hig );

I use this method and it works fine, except when rotating the device to portrait - the values should really be re-calculated when the camera activity is rotated (like in Wikitude), but I haven't implemented that as yet  :)
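Alternatively, the view angles can be derived from the OpenCV intrinsics directly, via angle = 2·atan(size / (2·f)). A rough sketch, assuming the calibration images were 960x720 (guessed from the principal point in the first post - use your actual image size):

```java
// Sketch: deriving horizontal/vertical view angles (in degrees) from the
// OpenCV focal lengths in pixels and the calibration image size.
public class IntrinsicsToFov {
    static double viewAngleDeg(double focalPx, double sizePx) {
        return Math.toDegrees(2.0 * Math.atan(sizePx / (2.0 * focalPx)));
    }

    public static void main(String[] args) {
        double fx = 966.64154, fy = 966.64154; // from the calibration above
        double width = 960, height = 720;      // assumed calibration image size
        double hDeg = viewAngleDeg(fx, width);
        double vDeg = viewAngleDeg(fy, height);
        System.out.printf("h = %.2f deg, v = %.2f deg%n", hDeg, vDeg);
        // Then feed these into jPCT exactly as above:
        // worldCamera.setFOV(worldCamera.convertDEGAngleIntoFOV((float) hDeg));
        // worldCamera.setYFOV(worldCamera.convertDEGAngleIntoFOV((float) vDeg));
    }
}
```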

To answer one of your questions, the axis description is at http://www.jpct.net/wiki/index.php/Coordinate_system

Offline rhadamanthus

Re: Augmented reality using JPCT-AE with OpenCV
« Reply #4 on: May 03, 2013, 12:10:49 pm »
Thank you very much for the help; that was exactly what I needed.
I apologize for not reading the documentation thoroughly enough - I'm way behind on the project and didn't know where to begin.