Integrating JPCT-AE with Vuforia


Qualcomm's Vuforia engine is one of the most powerful Augmented Reality engines out there. Its integration with jPCT-AE is a wonderful combination for easily creating spectacular AR scenes on your Android device.

The integration process, though, might get a little messy and confusing for someone without much experience with matrices or scene graphs. This guide will try to detail how to achieve a quick integration in a step-by-step fashion.

First of all, you should follow Qualcomm’s Getting Started guide to set up your environment to use Vuforia.

You should get to the point where you have run the ImageTargets demo app, since we will start from that sample.

Setting up the environment

Open the ImageTargets demo project in Eclipse. Right-click on the project, then select Properties. Go to Java Build Path, then Libraries, and add the jpct-ae.jar library using the Add External JARs… option.


jPCT-AE and Vuforia working together

Now we can start getting jpct-ae code into our app.

First, we will make jPCT-AE and Vuforia's native code share the same GL surface. To do that, open ImageTargetsRenderer.java under the ImageTargets sample app (package com.qualcomm.QCARSamples.ImageTargets).

This is the OpenGL renderer, and thus it's where our jPCT code should be injected. We will start by bringing the jPCT-AE Hello World sample code into this renderer. First, create a constructor for ImageTargetsRenderer, so that the Activity reference is passed in to our renderer instead of setting the attribute explicitly, as Qualcomm's demo does. We will also initialize our scene here.

public ImageTargetsRenderer(ImageTargets activity) {
    this.mActivity = activity;

    world = new World();
    world.setAmbientLight(20, 20, 20);

    sun = new Light(world);
    sun.setIntensity(250, 250, 250);

    // Create a texture out of the icon...:-)
    Texture texture = new Texture(BitmapHelper.rescale(BitmapHelper.convert(mActivity.getResources().getDrawable(R.drawable.ic_launcher)), 64, 64));
    TextureManager.getInstance().addTexture("texture", texture);

    cube = Primitives.getCube(10);
    cube.calcTextureWrapSpherical();
    cube.setTexture("texture");
    cube.strip();
    cube.build();

    world.addObject(cube);

    Camera cam = world.getCamera();
    cam.moveCamera(Camera.CAMERA_MOVEOUT, 50);
    cam.lookAt(cube.getTransformedCenter());

    SimpleVector sv = new SimpleVector();
    sv.set(cube.getTransformedCenter());
    sv.y -= 100;
    sv.z -= 100;
    sun.setPosition(sv);

    MemoryHelper.compact();
}

As you can see, I just copied and pasted the code from the Hello World demo (and changed the icon reference). Now go to ImageTargets.java, change the initialization of ImageTargetsRenderer to pass the activity into the constructor, and remove the line that sets it explicitly right after.
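
In ImageTargets.java the change is roughly the following (a sketch; the exact name of the renderer variable in the sample may differ):

// Create the renderer passing the activity in through the constructor...
mRenderer = new ImageTargetsRenderer(this);
// ...and remove the line that assigned the activity explicitly, e.g.:
// mRenderer.mActivity = this;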

Then go back to ImageTargetsRenderer, into the onSurfaceChanged method. This method is called whenever our surface changes size. We should put the jPCT-AE framebuffer initialization code here, straight from the Hello World demo.

if (fb != null) {
     fb.dispose();
}
fb = new FrameBuffer(width, height);

NOTE: You can use OpenGL ES 1.x or 2.0. If your phone supports 2.0, Vuforia will create a 2.0 surface, and creating a framebuffer for 1.x will then make the app crash. For simplicity I am posting here the code that corresponds to OpenGL ES 2.0 framebuffer initialization. You can add an OpenGL version check here and create the FrameBuffer accordingly. If you want to use 1.x only, you can force Vuforia to do so by going into the Android.mk file under the jni folder and setting the USE_OPENGL_ES_1_1 directive to true.
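
If you want to support both, a minimal sketch of such a check could look like this (it assumes a boolean flag, here called mUseGLES2, that you set depending on which OpenGL ES version Vuforia initialized; the flag name is my own, not part of the sample):

if (fb != null) {
     fb.dispose();
}
if (mUseGLES2) {
     // OpenGL ES 2.0 constructor
     fb = new FrameBuffer(width, height);
} else {
     // OpenGL ES 1.x constructor takes the GL10 instance passed to onSurfaceChanged
     fb = new FrameBuffer(gl, width, height);
}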


Okay, we have initialized our scene and framebuffer, but we have yet to tell jPCT to render it. This is done in the onDrawFrame method. From the Hello World sample, paste the following code directly after the renderFrame() native call:

world.renderScene(fb);
world.draw(fb);
fb.display(); 

As you can see, I have omitted the fb.clear() line. This is because QCAR's native OpenGL code already clears the framebuffer for us. If we included this line, we would be clearing the video information from the camera that the renderFrame() call puts there for us.
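
Put together, a sketch of the resulting onDrawFrame method looks like this (not the sample's exact code, just the relevant calls):

public void onDrawFrame(GL10 gl) {
     // Native call: clears the buffers and draws the camera image and tracking results
     renderFrame();

     // jPCT-AE renders the 3D scene on top of the camera image
     world.renderScene(fb);
     world.draw(fb);
     fb.display();
}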

Now fire the app, and you will see a cube over the camera scene.

Passing on the marker transformations

Now for the fun part: we need to modify the native code. Open the file ImageTargets.cpp under the jni directory within your project (you can open it with Eclipse; it even has syntax highlighting). You should go directly to the frame rendering function, which due to JNI's naming convention has the awkward name JNIEXPORT void JNICALL Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_renderFrame(JNIEnv *, jobject)

If you're used to regular OpenGL code, the contents of this function will look familiar to you. This is basically the render loop, where the framebuffer is cleared, the projection and modelview matrices are created and the objects are drawn.

I am assuming that you won't need Vuforia to render anything, since we'll be leaving that job to jPCT-AE. So the first thing we'll do is strip this function down to a few lines. This is what my renderFrame function looked like after removing the unnecessary code:

{
    // Clear color and depth buffer 
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Get the state from QCAR and mark the beginning of a rendering section
    QCAR::State state = QCAR::Renderer::getInstance().begin();
    // Explicitly render the Video Background
    QCAR::Renderer::getInstance().drawVideoBackground();
    // Did we find any trackables this frame?
    for(int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
    {
        // Get the trackable:
        const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
        const QCAR::Trackable& trackable = result->getTrackable();
        QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());        
    }
    QCAR::Renderer::getInstance().end();
}

If you run the app now, you will see that you no longer get a teapot when you point the camera at the marker.

What we need to do now is pass the modelview matrix representing the marker's position and orientation from the native code to our Java code.

To that end, we will create a method in ImageTargetsRenderer that will receive the modelview matrix from the native code. I called this method updateModelviewMatrix, and its parameter is an array of 16 floats (which is our 4x4 matrix).

public void updateModelviewMatrix(float mat[]) {
    modelViewMat = mat;
}
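
The modelViewMat referenced here is simply a field of the renderer holding the latest matrix (a one-line sketch):

// Latest modelview matrix received from the native code
private float[] modelViewMat;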

This method should be called on each frame to update the modelview matrix to the one derived from the marker's position and orientation. This means we need to call it from the native code on every frame. Hence, the native code should first get hold of this method, and then call it on each frame update.

To tell the JNI code which method to call, we first modify renderFrame's signature to give names to its parameters:

JNIEXPORT void JNICALL Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_renderFrame(JNIEnv *env, jobject obj)

Then we get hold of the updateModelviewMatrix method with the following piece of code:

jclass activityClass = env->GetObjectClass(obj); // the Java class of our renderer object
jmethodID updateMatrixMethod = env->GetMethodID(activityClass, "updateModelviewMatrix", "([F)V");

This tells JNI to obtain a method named "updateModelviewMatrix" that receives a float array as its parameter ([F) and whose return type is void (V). To better understand this syntax, you can refer to the official JNI specification.

We have a grip on the method; now we need to set up its parameter and then call it.

To do this, we would use the following piece of code, for each marker that is detected:

jfloatArray modelviewArray = env->NewFloatArray(16);
for (int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
{
    // Get the trackable:
    const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
    const QCAR::Trackable& trackable = result->getTrackable();
    QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());

    SampleUtils::rotatePoseMatrix(180.0f, 1.0f, 0, 0, &modelViewMatrix.data[0]);

    // Passes the model view matrix to java
    env->SetFloatArrayRegion(modelviewArray, 0, 16, modelViewMatrix.data);
    env->CallVoidMethod(obj, updateMatrixMethod, modelviewArray);
}
env->DeleteLocalRef(modelviewArray);

Note the rotation that we apply to modelViewMatrix. This is because jPCT's coordinate system is rotated 180 degrees around the X axis with respect to Vuforia's. What we do here is apply that rotation to the matrix before sending it to jPCT-AE.

We then copy the matrix values into our JNI-friendly float array with SetFloatArrayRegion, and then call the updateModelviewMatrix method of our Java code with that array.


Applying the matrix to the camera

Now back to the Java code. We now have access to the marker's modelview matrix returned by the AR engine. We just need to apply it to the camera, and we're set! (Well, almost.)

To do this, we will convert the float array we received earlier into jPCT-AE's own matrix class. We first create a Matrix object, then assign the raw float values using the setDump() method. After that, applying the matrix to the camera is just a matter of calling the setBack() method on the Camera object. I condensed all of this into an updateCamera() method:

public void updateCamera() {
    if (modelViewMat == null) {
        return; // no marker has been seen yet
    }
    Matrix m = new Matrix();
    m.setDump(modelViewMat);
    Camera cam = world.getCamera();
    cam.setBack(m);
}
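
updateCamera() has to run on every frame before the scene is rendered. A simple way to do that (assuming you call it from onDrawFrame, which is not spelled out above) is:

// In onDrawFrame, after the native renderFrame() call and before rendering the world:
renderFrame();
updateCamera();
world.renderScene(fb);
world.draw(fb);
fb.display();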

Launch your app now. You will be able to see the cube over the marker. We're done!

Setting up the FOV

Well, we're not. If you play with the application for a bit, you will notice that it does not behave quite as expected: if you move the device from side to side, the cube also moves a little to the sides. It just does not stay in the same spot like the teapot did in the original demo.

What's happening here is that the virtual camera does not actually have the same field of view as your phone's real camera. We need to set both to the same values in order for the app to behave as we expect.

If you want to know more about FOV, check out its Wikipedia entry: http://en.wikipedia.org/wiki/Field_of_view

Since every device is a different world, we can't just assume any FOV values, so we'll have to pass them from the native code as well. Fortunately, QCAR makes it easy to find out the horizontal and vertical FOV of the camera. This code snippet will give us the values we need:

const QCAR::CameraCalibration& cameraCalibration = QCAR::CameraDevice::getInstance().getCameraCalibration();
QCAR::Vec2F size = cameraCalibration.getSize();
QCAR::Vec2F focalLength = cameraCalibration.getFocalLength();
float fovyRadians = 2 * atan(0.5f * size.data[1] / focalLength.data[1]); // vertical FOV
float fovRadians = 2 * atan(0.5f * size.data[0] / focalLength.data[0]);  // horizontal FOV

We need to use the same mechanism we used earlier to send these values to our Java code:

jmethodID fovMethod = env->GetMethodID(activityClass, "setFov", "(F)V");
jmethodID fovyMethod = env->GetMethodID(activityClass, "setFovy", "(F)V");

env->CallVoidMethod(obj, fovMethod, fovRadians);
env->CallVoidMethod(obj, fovyMethod, fovyRadians);

Of course, we need to define both the setFov and setFovy methods in ImageTargetsRenderer, each receiving a float value as its parameter.
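
A minimal sketch of those two setters (the fov and fovy field names are my own choice):

private float fov;
private float fovy;

public void setFov(float fov) {
     this.fov = fov;
}

public void setFovy(float fovy) {
     this.fovy = fovy;
}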

After that, we add the following two lines to our updateCamera method:

	cam.setFOV(fov);
	cam.setYFOV(fovy);

And that's it. You can now start focusing on the 3D scene part.


NOTE: Alternatively, you can also obtain the up, right and direction vectors separately, plus the camera position, using the functions provided by Vuforia:

	QCAR::Matrix44F inverseMV = SampleMath::Matrix44FInverse(modelViewMatrix);
	QCAR::Matrix44F invTranspMV = SampleMath::Matrix44FTranspose(inverseMV);

	//Camera position
	float cam_x = invTranspMV.data[12];
	float cam_y = invTranspMV.data[13];
	float cam_z = invTranspMV.data[14];

	//Camera orientation axis (camera viewing direction, camera right direction and camera up direction)
	float cam_right_x = invTranspMV.data[0];
	float cam_right_y = invTranspMV.data[1];
	float cam_right_z = invTranspMV.data[2];

	float cam_up_x = -invTranspMV.data[4];
	float cam_up_y = -invTranspMV.data[5];
	float cam_up_z = -invTranspMV.data[6];

	float cam_dir_x = invTranspMV.data[8];
	float cam_dir_y = invTranspMV.data[9];
	float cam_dir_z = invTranspMV.data[10];

and then setting them up in jPCT-AE like this:

	cam.setOrientation(mCameraDirection, mCameraUp);
	cam.setPosition(mCameraPosition);
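
If you go this route, the position and axis values would be passed to Java the same way as the matrix, and wrapped into SimpleVectors before being applied, for example (the variable names are assumptions, not part of the sample):

	SimpleVector mCameraPosition = new SimpleVector(cam_x, cam_y, cam_z);
	SimpleVector mCameraDirection = new SimpleVector(cam_dir_x, cam_dir_y, cam_dir_z);
	SimpleVector mCameraUp = new SimpleVector(cam_up_x, cam_up_y, cam_up_z);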