jPCT-AE - a 3d engine for Android > Support

[Tips] Android, augmented reality 3D with JPCT + Camera.


All the code provided here is free to use, and I'm not responsible for what you use it for.

I received a question that I found interesting enough to share with everybody.

There is very little information on this subject on Android, and on how to set up a correctly working layout.

So in this topic I'll answer a simple question: how to use the Android camera with jPCT-AE as a renderer that overlays the camera preview
(the augmented reality concept).

You'll have to code your own engine around it to make it fully functional.

First we need to set up an XML layout.
Our minimum requirement is a GLSurfaceView, where we will draw the 3D (the jPCT engine),
and a SurfaceView to draw the camera preview.

--- Code: ---<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <android.opengl.GLSurfaceView android:id="@+id/glsurfaceview"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />

    <SurfaceView android:id="@+id/surface_camera"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:keepScreenOn="true" />

</FrameLayout>
--- End code ---

This initializes the window and the glSurfaceView.

--- Code: ---        // It speaks for itself; please refer to the Android developer documentation.
        requestWindowFeature(Window.FEATURE_NO_TITLE);

        // Fullscreen is not necessary... it's up to you.
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                WindowManager.LayoutParams.FLAG_FULLSCREEN);

        // The layout file name is assumed here; use your own.
        setContentView(R.layout.main);

        // Attach our glSurfaceView to the one in the XML file.
        glSurfaceView = (GLSurfaceView) findViewById(R.id.glsurfaceview);
--- End code ---

Now let's create the camera and the engine.
This is an example from my own code, so it may not exactly fit your needs,
but you can take inspiration from it.

The following code is pretty easy to understand:
I create a new camera, give a renderer to my glSurfaceView,
and of course set the translucent window (8888) pixel format and depth buffer on it.
(Without that, your glSurfaceView will not support an alpha channel and you will not see the camera layer.)

So basically:
1) Create the camera view.
2) Set up the glSurfaceView.
3) Set a renderer on the glSurfaceView.
4) Set the correct pixel format on the glSurfaceView holder.

--- Code: ---try {
    cameraView = new CameraView(this.getApplicationContext(),
            (SurfaceView) findViewById(R.id.surface_camera), imageCaptureCallback);
} catch (Exception e) {
    e.printStackTrace();
}

// Translucent window 8888 pixel format and depth buffer
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);

// GLEngine is a class I designed to interact with jPCT, with all the basic
// functions needed: create a world, render it, the onDrawFrame event, etc.
glEngine = new GLEngine(getResources());
glSurfaceView.setRenderer(glEngine);

// The ImageView id was not shown in the original post.
game = new Game(glEngine, (ImageView) findViewById(, getResources(), this);

// Use a surface format with an alpha channel:
glSurfaceView.getHolder().setFormat(PixelFormat.TRANSLUCENT);

// Start game
--- End code ---
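The GLEngine class itself isn't shown in the post. As a rough idea of what such a renderer could look like (my own minimal sketch, assuming the jPCT-AE World/FrameBuffer API; the class name and constructor mirror the poster's usage above but the contents are guesswork), the important detail is clearing the frame buffer with a fully transparent color so the camera layer underneath stays visible:

```java
// Sketch only: the real GLEngine is the poster's own class.
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.content.res.Resources;
import android.opengl.GLSurfaceView;

import com.threed.jpct.FrameBuffer;
import com.threed.jpct.RGBColor;
import com.threed.jpct.World;

public class GLEngine implements GLSurfaceView.Renderer {

    // Alpha 0: everything not covered by 3D geometry shows the camera layer.
    private static final RGBColor TRANSPARENT = new RGBColor(0, 0, 0, 0);

    private final Resources resources;
    private FrameBuffer frameBuffer;
    private World world = new World();

    public GLEngine(Resources resources) {
        this.resources = resources;
    }

    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Load textures, build objects and set up lights here.
    }

    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Recreate the frame buffer whenever the surface size changes.
        if (frameBuffer != null) {
            frameBuffer.dispose();
        }
        frameBuffer = new FrameBuffer(gl, width, height);
    }

    public void onDrawFrame(GL10 gl) {
        frameBuffer.clear(TRANSPARENT);
        world.renderScene(frameBuffer);
        world.draw(frameBuffer);
        frameBuffer.display();
    }
}
```

Without the transparent clear color (and the 8888 EGL config plus TRANSLUCENT holder format from the activity code), the GL layer paints opaque black over the camera preview.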

Here is my CameraView class :

--- Code: ---package com.dlcideas.ARescue.Camera;

import java.io.IOException;

import com.threed.jpct.Logger;

import android.content.Context;
import android.hardware.Camera;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class CameraView extends SurfaceView implements SurfaceHolder.Callback {

    /**
     * Create the CameraView.
     *
     * @param context the application context
     * @param surfaceView the SurfaceView the preview is drawn on
     * @param imageCaptureCallback the preview frame callback
     */
    public CameraView(Context context, SurfaceView surfaceView,
            ImageCaptureCallback imageCaptureCallback) {
        super(context);

        // Install a SurfaceHolder.Callback so we get notified when the
        // underlying surface is created and destroyed.
        previewHolder = surfaceView.getHolder();
        previewHolder.addCallback(this);
        previewHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);

        // Hold the reference of the captureCallback (null yet, will be changed
        // on surfaceChanged).
        this.imageCaptureCallback = imageCaptureCallback;
    }

    /**
     * Initialize the hardware camera.
     *
     * @param holder the holder of the preview surface
     */
    public void surfaceCreated(SurfaceHolder holder) {
        camera =;
        try {
            camera.setPreviewDisplay(holder);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        onStop();
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int width,
            int height) {
        if (previewRunning)
            camera.stopPreview();

        Camera.Parameters p = camera.getParameters();
        p.setPreviewSize(width, height);
        // camera.setParameters(p);

        try {
            camera.setPreviewDisplay(holder);
        } catch (IOException e) {
            e.printStackTrace();
        }
        camera.startPreview();
        previewRunning = true;
        Logger.log("camera surfaceChanged callback", Logger.MESSAGE);
        imageCaptureCallback = new ImageCaptureCallback(camera, width, height);
    }

    public void onStop() {
        // Surface will be destroyed when we return, so stop the preview.
        // Because the CameraDevice object is not a shared resource, it's very
        // important to release it when the activity is paused.
        camera.stopPreview();
        camera.release();
        previewRunning = false;
    }

    public void onResume() {
        camera =;
        previewRunning = true;
    }

    private Camera camera;
    private SurfaceHolder previewHolder;
    private boolean previewRunning;
    private ImageCaptureCallback imageCaptureCallback;
}
--- End code ---

Thanks for the help, I'm sure this will be useful to a few others as well.

The vital bit I needed was to change my GLSurfaceView's EGL config chooser from this:

--- Code: ---mGLView.setEGLConfigChooser(new GLSurfaceView.EGLConfigChooser() {
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        // Ensure that we get a 16 bit framebuffer. Otherwise, we'll fall
        // back to Pixelflinger on some devices (read: Samsung I7500).
        int[] attributes = new int[] { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE };
        EGLConfig[] configs = new EGLConfig[1];
        int[] result = new int[1];
        egl.eglChooseConfig(display, attributes, configs, 1, result);
        return configs[0];
    }
});
--- End code ---


to this:

--- Code: ---mGLView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
--- End code ---

Only thing is, that's clearly a fixed solution. I'd be worried about compatibility with other devices.
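One hedged way to soften that compatibility worry (a sketch of my own, not from the original post, and not tested across real devices): ask EGL for the translucent 8888 config first and only fall back to a looser request if no such config exists, instead of hard-coding either choice.

```java
// Sketch: a chooser that prefers 8888 + depth but degrades gracefully.
// Uses EGL10/EGLConfig/EGLDisplay from javax.microedition.khronos.egl.
mGLView.setEGLConfigChooser(new GLSurfaceView.EGLConfigChooser() {
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        // Try a translucent 8888 config with a depth buffer first.
        int[] preferred = { EGL10.EGL_RED_SIZE, 8, EGL10.EGL_GREEN_SIZE, 8,
                EGL10.EGL_BLUE_SIZE, 8, EGL10.EGL_ALPHA_SIZE, 8,
                EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE };
        EGLConfig config = firstConfig(egl, display, preferred);
        if (config != null) {
            return config;
        }
        // Fall back to any config with a 16 bit depth buffer
        // (no alpha, so the camera overlay won't work, but the app still runs).
        int[] fallback = { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE };
        return firstConfig(egl, display, fallback);
    }

    private EGLConfig firstConfig(EGL10 egl, EGLDisplay display, int[] attributes) {
        EGLConfig[] configs = new EGLConfig[1];
        int[] count = new int[1];
        egl.eglChooseConfig(display, attributes, configs, 1, count);
        return count[0] > 0 ? configs[0] : null;
    }
});
```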

I think I saw a while back that it's possible to use camera tracking if you print out a page; then it will look like the object is sitting on the page. Any idea how to do that?  :D :D

You'll need a specific library for that, as it's quite complex work.
If you google around you should be able to find some open source projects for it, though. There's a lot of rapid AR development at the moment, and tons of open source projects.

Anyone know a good way to sync the camera angle in the code to the real camera's angle on the phone?

I know how to read the sensors and get (rough) angles in the x/y/z from both magnetic and gravitational sensors.

Not sure how to turn this into a SimpleVector for my camera though.
I'm guessing maths is involved :P
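It is mostly maths, yes. One hedged sketch of the conversion (my own code, not from jPCT; `SensorDirection` and its conventions are my invention): treat azimuth and pitch as spherical coordinates and build a unit direction vector in an east/north/up frame. The resulting x/y/z triple is what you would feed into a jPCT `SimpleVector` for the camera's look direction, after remapping the axes to jPCT's coordinate system.

```java
// Sketch: sensor angles -> unit direction vector (east, north, up).
// Conventions assumed: azimuth in radians, clockwise from north;
// pitch in radians, positive looking up. Roll is ignored here
// (it affects the camera's "up" vector, not the look direction).
public class SensorDirection {

    public static float[] toDirection(float azimuth, float pitch) {
        float cosPitch = (float) Math.cos(pitch);
        float east  = (float) (Math.sin(azimuth) * cosPitch);
        float north = (float) (Math.cos(azimuth) * cosPitch);
        float up    = (float) Math.sin(pitch);
        return new float[] { east, north, up };
    }
}
```

For example, azimuth 0 with pitch 0 gives (0, 1, 0), i.e. looking due north and level; azimuth 90 degrees gives a vector pointing east.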


--- Quote from: Darkflame on May 14, 2010, 11:20:30 pm ---I know how to read the sensors and get (rough) angles in the x/y/z from both magnetic and gravitational sensors.
--- End quote ---
Now if only you could get exact GPS coordinates for the phone as well - with that and the angles, you could, for example, place a secret clue somewhere, and create a real-world treasure hunt game that people use their androids to play..
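For what it's worth, the GPS side of that idea is mostly a coordinate conversion: turn the difference between two lat/lon pairs into local east/north metres, then place the clue's 3D object at that offset from the player. A rough sketch (my own code, using an equirectangular approximation that is only reasonable over the short distances GPS accuracy allows anyway):

```java
// Sketch: lat/lon difference -> approximate local offset in metres.
// Equirectangular approximation; error grows with distance and latitude.
public class GeoOffset {

    private static final double EARTH_RADIUS_M = 6371000.0;

    public static double[] toLocalMetres(double lat0, double lon0,
                                         double lat, double lon) {
        double dLat = Math.toRadians(lat - lat0);
        double dLon = Math.toRadians(lon - lon0);
        // Shrink east-west distance by cos(latitude) of the reference point.
        double east  = dLon * Math.cos(Math.toRadians(lat0)) * EARTH_RADIUS_M;
        double north = dLat * EARTH_RADIUS_M;
        return new double[] { east, north };
    }
}
```

One degree of latitude comes out at roughly 111 km, which matches the usual rule of thumb; combined with the sensor angles above, that offset gives you both where the clue is and whether the phone is pointing at it.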

