Integrating JPCT-AE with Vuforia


Qualcomm's Vuforia engine is one of the most powerful Augmented Reality engines available. Its integration with jPCT-AE is a wonderful combination for easily creating spectacular AR scenes on your Android device.

The integration process, though, can get a little messy and confusing for someone without much experience with matrices or scene graphs. This guide details how to achieve a quick integration in a step-by-step fashion.


Setting up the environment

First of all, you should follow Qualcomm’s Getting Started guide to set up your environment to use Vuforia. You should get to the point where you have run the ImageTargets demo app, since we will start from that sample.

Open the ImageTargets demo project in Eclipse. Right-click the project, then select Properties. Go to Java Build Path, then Libraries, and add the jpct-ae.jar library using the Add External JARs… option.

Now we can start getting jpct-ae code into our app.

jPCT-AE and Vuforia working together

First, we will make jPCT-AE and Vuforia's native code share the same GL surface. Open ImageTargetsRenderer.java in the ImageTargets sample app (package com.qualcomm.QCARSamples.ImageTargets).

This is the OpenGL renderer, and thus it is where our jPCT code should be injected. We will start by bringing the jPCT-AE Hello World sample into this renderer.

First, create a constructor for ImageTargetsRenderer. This lets the Activity reference be passed into our renderer, instead of being set explicitly as an attribute, as Qualcomm's demo does. We will also initialize our scene here.

public ImageTargetsRenderer(ImageTargets activity) {
    this.mActivity = activity;

    world = new World();
    world.setAmbientLight(20, 20, 20);

    sun = new Light(world);
    sun.setIntensity(250, 250, 250);

    // Create a texture out of the icon...:-)
    Texture texture = new Texture(BitmapHelper.rescale(BitmapHelper.convert(mActivity.getResources().getDrawable(R.drawable.ic_launcher)), 64, 64));
    TextureManager.getInstance().addTexture("texture", texture);

    cube = Primitives.getCube(10);
    cube.calcTextureWrapSpherical();
    cube.setTexture("texture");
    cube.strip();
    cube.build();

    world.addObject(cube);

    cam = world.getCamera();
    cam.moveCamera(Camera.CAMERA_MOVEOUT, 50);
    cam.lookAt(cube.getTransformedCenter());

    SimpleVector sv = new SimpleVector();
    sv.set(cube.getTransformedCenter());
    sv.y -= 100;
    sv.z -= 100;
    sun.setPosition(sv);

    MemoryHelper.compact();
}

As you can see, I just copied and pasted the code from the Hello World demo and changed the icon reference. I also made the camera object a field instead of a local variable. We will remove the camera initialization later, since we will handle the camera dynamically according to the marker's orientation. For now, go to ImageTargets.java, change the initialization of ImageTargetsRenderer to pass the activity in the constructor, and remove the line setting this attribute immediately afterwards.

Then go back to ImageTargetsRenderer, into the onSurfaceChanged method. This method is called whenever our surface changes size, so we should put the jPCT-AE framebuffer initialization code here, straight from the Hello World demo.

if (fb != null) {
     fb.dispose();
}
fb = new FrameBuffer(width, height);

NOTE: You can use OpenGL ES 1.0 or 2.0. If your phone supports 2.0, Vuforia will create a 2.0 surface, and creating a framebuffer for 1.0 will then make the app crash. For simplicity, the code above corresponds to an OpenGL ES 2.0 framebuffer initialization; you can add an OpenGL version check here and create the framebuffer accordingly. If you want to use 1.0 only, you can force Vuforia to do so by setting the USE_OPENGL_ES_1_1 directive to true in the Android.mk file under the jni folder, and then using the FrameBuffer(gl, width, height) constructor instead of the one used above (which corresponds to OpenGL ES 2.0).


Okay, we have initialized our scene and framebuffer, but we have yet to tell jPCT to render them. This is done in the onDrawFrame() method. From the Hello World sample, paste the following code directly after the renderFrame() native call:

world.renderScene(fb);
world.draw(fb);
fb.display(); 

As you can see, I have omitted the fb.clear() line. This is because QCAR's native OpenGL code already clears the framebuffer for us; if we included this line, we would wipe out the camera's video feed that renderFrame() puts there.

Now fire up the app, and you will see a cube over the camera scene.

Passing on the marker transformations

Now for the fun part: we need to modify the native code. Open the file ImageTargets.cpp under the jni directory in your project (you can open it with Eclipse; it even has syntax highlighting). Go directly to the frame rendering function, which due to JNI's naming convention has the awkward name JNIEXPORT void JNICALL Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_renderFrame(JNIEnv *, jobject)

If you’re used to regular OpenGL code, the contents of this function will look familiar. This is basically the render loop, where the framebuffer is cleared, the projection, model and view matrices are calculated, and the objects are drawn.

I am assuming that you won’t need Vuforia to render anything, since we will leave that job to jPCT-AE. So the first thing we will do is strip this function down to a few lines. This is what my renderFrame function looked like after removing the unnecessary code:

{
    // Clear color and depth buffer 
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Get the state from QCAR and mark the beginning of a rendering section
    QCAR::State state = QCAR::Renderer::getInstance().begin();
    // Explicitly render the Video Background
    QCAR::Renderer::getInstance().drawVideoBackground();
    // Did we find any trackables this frame?
    for(int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
    {
        // Get the trackable:
        const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
        const QCAR::Trackable& trackable = result->getTrackable();
        QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());        
    }
    QCAR::Renderer::getInstance().end();
}

Compile the native code by running ndk-build from the command line within the project directory.

If you run the app now, you will see that you no longer get a teapot when you point the camera at the marker. What we need to do now is pass the modelview matrix representing the marker's position and orientation from the native code to our Java code. For that, we will create a method in ImageTargetsRenderer that will receive the modelview matrix from the native code. I called this method updateModelviewMatrix, and its parameter is an array of 16 floats (which is our 4x4 matrix).

public void updateModelviewMatrix(float mat[]) {
    modelViewMat = mat;
}
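
As an aside, the 16-float array is laid out in OpenGL's column-major order (convertPose2GLMatrix produces a GL-style matrix), so element (row, col) lives at index col * 4 + row, and the translation occupies indices 12 to 14. Here is a small plain-Java sketch of that layout (no jPCT or Vuforia types involved; the class and method names are just for illustration):

```java
public class MatrixLayoutDemo {
    // Build a column-major 4x4 translation matrix, as OpenGL expects it.
    static float[] translation(float x, float y, float z) {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1.0f; // identity rotation part
        m[12] = x; // translation sits in the last column:
        m[13] = y; // indices 12, 13 and 14 in column-major order
        m[14] = z;
        return m;
    }

    // Element (row, col) of a column-major matrix.
    static float at(float[] m, int row, int col) {
        return m[col * 4 + row];
    }

    public static void main(String[] args) {
        float[] m = translation(1f, 2f, 3f);
        System.out.println(at(m, 0, 3)); // x translation -> 1.0
        System.out.println(at(m, 1, 3)); // y translation -> 2.0
        System.out.println(at(m, 3, 3)); // homogeneous corner -> 1.0
    }
}
```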

This method should be called on each frame, updating the modelview matrix with the one derived from the marker's position and orientation. This means the native code must first get hold of this method, and then call it on every frame update.

To tell the JNI code which method to call, we first modify renderFrame's signature to give names to its parameters:

JNIEXPORT void JNICALL Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_renderFrame(JNIEnv *env, jobject obj)
{
    jclass activityClass = env->GetObjectClass(obj); // Get the class of the object we were called on

Then we get hold of the updateModelviewMatrix method with the following piece of code:

jmethodID updateMatrixMethod = env->GetMethodID(activityClass, "updateModelviewMatrix", "([F)V");

This tells JNI to look up a method named "updateModelviewMatrix" that takes a single float array parameter ([F) and returns void (V). To better understand this syntax, refer to the official JNI specification.

We have a grip on the method; now we need to set up its parameter and have the native code call it. To do this, we use the following piece of code for each detected marker:

jfloatArray modelviewArray = env->NewFloatArray(16);
for(int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
{
	// Get the trackable:
	const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
	const QCAR::Trackable& trackable = result->getTrackable();
	QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());

        SampleUtils::rotatePoseMatrix(180.0f, 1.0f, 0, 0, &modelViewMatrix.data[0]);
        // Pass the model view matrix to Java
        env->SetFloatArrayRegion(modelviewArray, 0, 16, modelViewMatrix.data);
        env->CallVoidMethod(obj, updateMatrixMethod, modelviewArray);
}
env->DeleteLocalRef(modelviewArray);

Note the rotation we apply to the modelViewMatrix. This is because jPCT's coordinate system is rotated 180 degrees around the X axis with respect to Vuforia's, so we perform that rotation on the matrix before sending it to jPCT-AE.
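
The net effect of that rotation can be sketched in plain Java (this is a sketch of the result, not SampleUtils' actual implementation): post-multiplying a column-major pose matrix by a 180-degree rotation about X negates the second and third columns (the up and direction vectors) while leaving the first column and the translation intact.

```java
public class RotateX180 {
    // Post-multiply a column-major 4x4 pose by a 180-degree rotation
    // about X: columns 1 (up) and 2 (direction) flip sign, columns 0
    // (right) and 3 (translation) are unchanged.
    static float[] rotateX180(float[] m) {
        float[] r = m.clone();
        for (int i = 4; i < 12; i++) {
            r[i] = -r[i]; // negate columns 1 and 2
        }
        return r;
    }

    public static void main(String[] args) {
        // Identity pose: rotating it yields diag(1, -1, -1, 1).
        float[] identity = new float[16];
        identity[0] = identity[5] = identity[10] = identity[15] = 1f;
        float[] flipped = rotateX180(identity);
        System.out.println(flipped[0]);  // 1.0
        System.out.println(flipped[5]);  // -1.0
        System.out.println(flipped[10]); // -1.0
    }
}
```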

We then copy the matrix values into our JNI-friendly float array with SetFloatArrayRegion, and call the updateModelviewMatrix method in our Java code with that array.

Applying the matrix to the camera

Now back to the Java code. We now have access to the marker's modelview matrix returned by the AR engine. We just need to apply it to the camera, and we're set! (Well, almost.)

To do this, we will convert the float array we received earlier into jPCT-AE's own Matrix class. We first create a Matrix object, then assign the raw float values using the setDump() method. After that, applying the matrix to the camera is just a matter of calling setBack() on the Camera object. I condensed all this into an updateCamera() method, which I call right after the renderFrame() call in the onDrawFrame() method:

public void updateCamera() {
    Matrix m = new Matrix();
    m.setDump(modelViewMat);
    cam.setBack(m);
}

And we must remove the lines configuring the camera from our ImageTargetsRenderer constructor:

//Remove this!
cam.moveCamera(Camera.CAMERA_MOVEOUT, 50);
cam.lookAt(cube.getTransformedCenter());

Launch your app now. You will be able to see the cube over the marker. We're done!

Setting up the FOV

Well, we're not. If you play with the application a little, you will notice that it does not behave quite as expected: if you move the device from side to side, the cube also drifts slightly to the sides. It just does not stay in the same spot like the teapot did in the original demo.

What's happening here is that the virtual camera does not have the same field of view (FOV) as the actual camera on your phone. We need to set them to the same values for the app to behave as expected. (If you want to know more about FOV, check out its Wikipedia entry.)

Since every device is different, we can't just assume any FOV values, so we will have to pass them from the native code as well. Fortunately, QCAR makes it easy to find out the horizontal and vertical FOV of the camera. This code snippet gives us the values we need:

const QCAR::CameraCalibration& cameraCalibration = QCAR::CameraDevice::getInstance().getCameraCalibration();
QCAR::Vec2F size = cameraCalibration.getSize();
QCAR::Vec2F focalLength = cameraCalibration.getFocalLength();
float fovyRadians = 2 * atan(0.5f * size.data[1] / focalLength.data[1]);
float fovRadians = 2 * atan(0.5f * size.data[0] / focalLength.data[0]);
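
This is the standard pinhole-camera relation: fov = 2 · atan(0.5 · size / focalLength), with both the sensor size and the focal length expressed in pixels. A quick sanity check in plain Java (the numbers are made up for illustration):

```java
public class FovDemo {
    // Pinhole camera: field of view from sensor size and focal length,
    // both in pixels.
    static double fovRadians(double sizePixels, double focalLengthPixels) {
        return 2.0 * Math.atan(0.5 * sizePixels / focalLengthPixels);
    }

    public static void main(String[] args) {
        // If the focal length equals half the sensor size, the half-angle
        // is atan(1) = 45 degrees, so the full FOV is 90 degrees.
        double fov = fovRadians(480.0, 240.0);
        System.out.println(Math.toDegrees(fov)); // ~90 degrees
    }
}
```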

We need to use the same mechanism we used earlier to send these to our java code:

jmethodID fovMethod = env->GetMethodID(activityClass, "setFov", "(F)V");
jmethodID fovyMethod = env->GetMethodID(activityClass, "setFovy", "(F)V");

env->CallVoidMethod(obj, fovMethod, fovRadians);
env->CallVoidMethod(obj, fovyMethod, fovyRadians);

Of course, we need to define both the setFov and setFovy methods in ImageTargetsRenderer, each receiving a float parameter.

After that, we add the following two lines to our updateCamera method:

cam.setFovAngle(fov);
cam.setYFovAngle(fovy);

And that's it. You can start now focusing on the 3D scene part.

If your objects seem to rotate "oddly" (i.e. they seem to move when you rotate the device around them), they might be lying "behind" the position where the marker would be in the virtual scene. In the case of the cube in the Hello World app, this means translating the cube along the Z axis so it lies "above" the marker. Since our cube is 10 units big, translate it 10 units along the Z axis and you've got it.


NOTE: alternatively, you can obtain the up, right and direction vectors separately, plus the camera position, using the functions provided by Vuforia:

	QCAR::Matrix44F inverseMV = SampleMath::Matrix44FInverse(modelViewMatrix);
	QCAR::Matrix44F invTranspMV = SampleMath::Matrix44FTranspose(inverseMV);

	//Camera position
	float cam_x = invTranspMV.data[12];
	float cam_y = invTranspMV.data[13];
	float cam_z = invTranspMV.data[14];

	//Camera orientation axis (camera viewing direction, camera right direction and camera up direction)
	float cam_right_x = invTranspMV.data[0];
	float cam_right_y = invTranspMV.data[1];
	float cam_right_z = invTranspMV.data[2];

	float cam_up_x = -invTranspMV.data[4];
	float cam_up_y = -invTranspMV.data[5];
	float cam_up_z = -invTranspMV.data[6];

	float cam_dir_x = invTranspMV.data[8];
	float cam_dir_y = invTranspMV.data[9];
	float cam_dir_z = invTranspMV.data[10];

and then setting them up in jPCT-AE like this:

	cam.setOrientation(mCameraDirection, mCameraUp);
	cam.setPosition(mCameraPosition);

Additional corrections

In some cases you might find that the results obtained so far are still not right. Most cameras have a 4:3 aspect ratio, so Vuforia automatically creates a video stream that matches this ratio, even if your display is widescreen (16:10 or 16:9). The size of the stream is calculated by taking the widest side of your screen as the baseline and extending the height to match the aspect ratio. For instance, on my Galaxy Nexus, which has a 1196x720 display, this video stream comes in at a resolution of 1196x897 (which corresponds to a 4:3 frame with a fixed width of 1196).
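
The arithmetic behind that example can be sketched in plain Java: with the stream width pinned to the display's widest side, a 4:3 stream needs a height of width · 3/4 (the class and method names here are just for illustration).

```java
public class StreamSizeDemo {
    // Height of a 4:3 video stream whose width is pinned to the
    // display's widest side.
    static int streamHeight(int displayWidth) {
        return displayWidth * 3 / 4;
    }

    public static void main(String[] args) {
        // Galaxy Nexus example from the text: 1196x720 display.
        System.out.println(streamHeight(1196)); // 897
    }
}
```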

This might result in the 3D objects being slightly displaced from the marker's position, because the framebuffer is bigger than the actual display on the device.

What we need to do now is account for these excess pixels, which in my case amount to 177 (897 - 720). Of this portion of the framebuffer that can't be shown, half is discarded at the top and half at the bottom. You will probably find that the object is displaced by half that number of pixels.

We can correct this by offsetting the viewport by that number on the vertical axis.

To do this, we need to use the beta version of JPCT-AE, which can be found here. To use this version we just replace the jar from the original stable version of JPCT-AE with this one.

Then we need to tell jPCT-AE to enable offsetting of the viewport by putting this line somewhere in our initialization (the ImageTargetsRenderer constructor is fine):

Config.viewportOffsetAffectsRenderTarget=true;

We then need to normalize the offset before telling jPCT-AE to use it. We do this by dividing the number of pixels by the total height of the screen (this gives us a number between 0 and 1, which is what normalizing means here). In the case of my Galaxy Nexus, this is half the total number of discarded pixels (177), which is about 88; normalizing it (88/720) gives us a value of roughly 0.12.
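
Putting that arithmetic together in a plain-Java sketch (897 and 720 are the Galaxy Nexus figures from above; the class and method names are just for illustration):

```java
public class ViewportOffsetDemo {
    // Normalized vertical viewport offset: half of the pixels the
    // oversized video stream discards, relative to the display height.
    static float normalizedOffsetY(int streamHeight, int displayHeight) {
        float discarded = streamHeight - displayHeight; // 897 - 720 = 177
        return (discarded / 2f) / displayHeight;        // 88.5 / 720
    }

    public static void main(String[] args) {
        // Galaxy Nexus example from the text: rounds to ~0.12.
        System.out.println(normalizedOffsetY(897, 720));
    }
}
```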

Once we have calculated this value, we set it up with the following line of code:

Config.viewportOffsetY = 0.12f;

Obviously these are hardcoded values, which makes this solution valid only for a specific device or those with a similar configuration. We can generalize it by sending the actual size of the video stream from Vuforia. This size is calculated in the configureVideoBackground function and stored in the config.mSize variable. Just send this value to the Java code as we previously did with the matrix, then do the calculations using the real display size, which is obtained in Android with this method:

Point size = new Point();
getWindowManager().getDefaultDisplay().getSize(size);

Handling Portrait Mode

If you change the device orientation to portrait mode, you will notice that the model is no longer positioned on the marker and moves peculiarly when the camera moves. The reason is again Vuforia's coordinate system, which is locked to landscape mode. To solve this, we need to change the camera's up direction, swap the horizontal and vertical FOV values, and apply them.

Obtain the camera's up and right vectors as above from the invTranspMV matrix:

	//Camera orientation axis (camera right direction and camera up direction)
	float cam_right_x = invTranspMV.data[4];
	float cam_right_y = invTranspMV.data[5];
	float cam_right_z = invTranspMV.data[6];

	float cam_up_x = -invTranspMV.data[0];
	float cam_up_y = -invTranspMV.data[1];
	float cam_up_z = -invTranspMV.data[2];

Note: the camera position and viewing direction remain the same.

Now, swap the calculated horizontal and vertical FOV values. You can edit the code in the 'Setting up the FOV' section to add a condition on the orientation:

if (isActivityInPortraitMode)
{
    env->CallVoidMethod(obj, fovyMethod, fovRadians);
    env->CallVoidMethod(obj, fovMethod, fovyRadians);
}
else
{
    env->CallVoidMethod(obj, fovMethod, fovRadians);
    env->CallVoidMethod(obj, fovyMethod, fovyRadians);
}

And that's it, we're done!
