All the code provided here is free to use, and I'm not responsible for what you use it for.
I received a question that I found very interesting to share with everybody.
There is very little information about this subject on Android and how to set up a correctly working layout.
So in this topic, I'll answer the simple question: how do you use the Android camera with jPCT-AE as a renderer that overlays the camera preview?
(The augmented reality concept.)
== ALL THE SOURCES PROVIDED IN CODE QUOTES ARE NOT COMPLETE! ==
You have to code your own engine around them to get things fully functional.
First we need to set up an XML layout.
Our minimum requirement is a GLSurfaceView, which is where we will draw the 3D (the jPCT engine),
and a SurfaceView to draw the camera preview.
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <android.opengl.GLSurfaceView android:id="@+id/glsurfaceview"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />
    <SurfaceView android:id="@+id/surface_camera"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:layout_centerInParent="true"
        android:keepScreenOn="true" />
</FrameLayout>
This initializes the window and the glSurfaceView.
// It speaks for itself; please refer to the Android developer documentation.
getWindow().setFormat(PixelFormat.TRANSLUCENT);
// Fullscreen is not necessary... it's up to you.
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
WindowManager.LayoutParams.FLAG_FULLSCREEN);
setContentView(R.layout.THE_XML_LAYOUT_CREATED_BEFORE);
// attach our glSurfaceView to the one in the XML file.
glSurfaceView = (GLSurfaceView) findViewById(R.id.glsurfaceview);
Now let's create the camera and the engine.
This is an example from my own code, so perhaps it won't fit your needs exactly,
but you can take inspiration from it.
The following code is pretty easy to understand:
I create a new camera view, give a renderer to my glSurfaceView,
and of course set the translucent (8888) pixel format and a depth buffer on it.
(Without that, your glSurfaceView will not support an alpha channel and you will not see the camera layer.)
So basically:
1) Create the camera view.
2) Set up the glSurfaceView.
3) Set a renderer on the glSurfaceView.
4) Set the correct pixel format on the glSurfaceView holder.
try {
    cameraView = new CameraView(this.getApplicationContext(),
            (SurfaceView) findViewById(R.id.surface_camera), imageCaptureCallback);
} catch (Exception e) {
    e.printStackTrace();
}
// Translucent window 8888 pixel format and depth buffer
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
// GLEngine is a class I designed to interact with jPCT; it provides all the
// basic functions needed: create a world, render it, the onDrawFrame event, etc.
glEngine = new GLEngine(getResources());
glSurfaceView.setRenderer(glEngine);
game = new Game(glEngine, (ImageView) findViewById(R.id.animation_screen),
        getResources(), this.getBaseContext());
// Use a surface format with an Alpha channel:
glSurfaceView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
// Start game
game.start();
Here is my CameraView class:
package com.dlcideas.ARescue.Camera;
import java.io.IOException;
import com.threed.jpct.Logger;
import android.content.Context;
import android.hardware.Camera;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
public class CameraView extends SurfaceView implements SurfaceHolder.Callback {
/**
 * Create the camera view and register for surface callbacks.
 *
 * @param context
 * @param surfaceView
 */
public CameraView(Context context, SurfaceView surfaceView,
ImageCaptureCallback imageCaptureCallback) {
super(context);
// Install a SurfaceHolder.Callback so we get notified when the
// underlying surface is created and destroyed.
previewHolder = surfaceView.getHolder();
previewHolder.addCallback(this);
previewHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
//previewHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
// Hold the reference to the captureCallback (null for now; it will be set
// in surfaceChanged).
this.imageCaptureCallback = imageCaptureCallback;
}
/**
 * Initialize the hardware camera.
 *
 * @param holder The holder that receives the preview
 */
public void surfaceCreated(SurfaceHolder holder) {
camera = Camera.open();
try {
camera.setPreviewDisplay(holder);
} catch (IOException e) {
// setPreviewDisplay() can fail if the surface isn't available yet.
e.printStackTrace();
}
}
/**
 * Stop the preview when the surface is destroyed.
 */
public void surfaceDestroyed(SurfaceHolder holder) {
this.onStop();
}
public void surfaceChanged(SurfaceHolder holder, int format, int width,
int height) {
if (previewRunning)
camera.stopPreview();
Camera.Parameters p = camera.getParameters();
p.setPreviewSize(width, height);
// Note: setParameters(p) is left commented out below, so the requested
// preview size is never actually applied; see the sketch after this class.
// camera.setParameters(p);
try {
camera.setPreviewDisplay(holder);
} catch (IOException e) {
e.printStackTrace();
}
previewRunning = true;
Logger.log("camera surfaceChanged callback", Logger.MESSAGE);
camera.startPreview();
imageCaptureCallback = new ImageCaptureCallback(camera, width, height);
}
public void onStop() {
// Surface will be destroyed when we return, so stop the preview.
// Because the CameraDevice object is not a shared resource, it's very
// important to release it when the activity is paused.
imageCaptureCallback.stopImageProcessing();
camera.setPreviewCallback(null);
camera.stopPreview();
previewRunning = false;
camera.release();
}
public void onResume() {
camera = Camera.open();
camera.setPreviewCallback(imageCaptureCallback);
previewRunning = true;
}
private Camera camera;
private SurfaceHolder previewHolder;
private boolean previewRunning;
private ImageCaptureCallback imageCaptureCallback;
}
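A caveat about the surfaceChanged() code above: since camera.setParameters(p) stays commented out, the requested preview size is never applied, and applying it blindly can crash on devices that don't support the exact surface size. A minimal sketch of a safer version (my own addition, assuming getSupportedPreviewSizes() is available, i.e. API level 5+):

// Sketch: pick the supported preview size closest to the surface size
// and actually apply the parameters. Call this from surfaceChanged().
private void applyBestPreviewSize(Camera camera, int width, int height) {
    Camera.Parameters p = camera.getParameters();
    Camera.Size best = null;
    for (Camera.Size s : p.getSupportedPreviewSizes()) {
        int diff = Math.abs(s.width - width) + Math.abs(s.height - height);
        if (best == null
                || diff < Math.abs(best.width - width) + Math.abs(best.height - height)) {
            best = s;
        }
    }
    p.setPreviewSize(best.width, best.height);
    camera.setParameters(p); // actually apply the chosen size
}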
Thanks for the help, I'm sure this will be useful to a few others as well.
The vital bit I needed was...
mGLView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
and to change my
mGLView.setEGLConfigChooser(new GLSurfaceView.EGLConfigChooser() {
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        // Ensure that we get a 16 bit framebuffer. Otherwise, we'll fall
        // back to PixelFlinger on some devices (read: Samsung I7500).
        int[] attributes = new int[] { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE };
        EGLConfig[] configs = new EGLConfig[1];
        int[] result = new int[1];
        egl.eglChooseConfig(display, attributes, configs, 1, result);
        return configs[0];
    }
});
to
mGLView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
Only thing is, that's clearly a fixed solution. I'd be worried about other device compatibility.
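One defensive option (a sketch, untested across devices; the attribute list is my assumption of a reasonable minimum, not a verified one) is to ask EGL for the 8888 config first and only fall back to a plain 16 bit depth config if the device has none. Note the fallback has no alpha channel, so the camera layer won't show through on such devices:

mGLView.setEGLConfigChooser(new GLSurfaceView.EGLConfigChooser() {
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        // Prefer a translucent 8888 config with a 16 bit depth buffer...
        int[] rgba8888 = new int[] { EGL10.EGL_RED_SIZE, 8, EGL10.EGL_GREEN_SIZE, 8,
                EGL10.EGL_BLUE_SIZE, 8, EGL10.EGL_ALPHA_SIZE, 8,
                EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE };
        // ...but fall back to any config with a 16 bit depth buffer if none matches.
        int[] fallback = new int[] { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE };
        EGLConfig[] configs = new EGLConfig[1];
        int[] count = new int[1];
        egl.eglChooseConfig(display, rgba8888, configs, 1, count);
        if (count[0] == 0) {
            egl.eglChooseConfig(display, fallback, configs, 1, count);
        }
        return configs[0];
    }
});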
I think I saw a while back that it's possible to use camera tracking so that if you print out a page, it will look like the object is sitting on the page. Any idea how to do that? :D :D
You'll need a specific library for that, as it's quite complex work.
If you Google around you should be able to find some open source projects for it though. There's a lot of rapid AR development at the moment; there are tons of open source projects.
---
Anyone know a good way to sync the camera angle in the code to the real camera's angle on the phone?
I know how to read the sensors and get (rough) angles in the x/y/z from both magnetic and gravitational sensors.
Not sure how to turn this into a SimpleVector for my camera though.
I'm guessing maths is involved :P
Quote from: Darkflame on May 14, 2010, 11:20:30 PM
I know how to read the sensors and get (rough) angles in the x/y/z from both magnetic and gravitational sensors.
Now if only you could get exact GPS coordinates for the phone as well - with that and the angles, you could, for example, place a secret clue somewhere, and create a real-world treasure hunt game that people use their androids to play..
That's exactly my goal.
Or, rather, allowing anyone to place messages tied to real locations and share them with anyone else :)
This is why I'm using Wave servers as a back-end; it lets people have a kind of "social" AR. They can share their posts with either individuals, groups, or the public at large.
I already got the system working on PCs with a Google Maps style client:
http://arwave.org/ (see video)
That was more or less to prove the concept. (Though as it's made in Qt, porting it later to Nokia phones shouldn't be too hard.)
Now I want to make a full AR one.
Anyone know how to use the orientation sensors to set the jPCT camera to correspond?
I've been looking at the source code of Mixare
(http://code.google.com/p/mixare/source/checkout)
for guidelines.
The problem is that they are using their own engine, so the rotations aren't in the needed format.
Here's what I've tried:
@Override
public void onSensorChanged(SensorEvent evt) {
    try {
        if (evt.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            grav[0] = evt.values[0];
            grav[1] = evt.values[1];
            grav[2] = evt.values[2];
            arView.postInvalidate();
        } else if (evt.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            mag[0] = evt.values[0];
            mag[1] = evt.values[1];
            mag[2] = evt.values[2];
            arView.postInvalidate();
        }
        SensorManager.getRotationMatrix(RTmp, I, grav, mag);
        //SensorManager.remapCoordinateSystem(RTmp, SensorManager.AXIS_X, SensorManager.AXIS_MINUS_Z, Rt);
        Rt = RTmp;
        tempR.setRow(0, Rt[0], Rt[1], Rt[2], 0);
        tempR.setRow(1, Rt[3], Rt[4], Rt[5], 0);
        tempR.setRow(2, Rt[6], Rt[7], Rt[8], 0);
        tempR.setRow(3, 0, 0, 0, 1);
        Log.i("--", Rt[0] + " " + Rt[1] + " " + Rt[2]);
        Log.i("--", Rt[3] + " " + Rt[4] + " " + Rt[5]);
        Log.i("--", Rt[6] + " " + Rt[7] + " " + Rt[8]);
        arView.setCameraOrentation(tempR);
    } catch (Exception ex) {
        Log.e("Sensor", "ProcessingError", ex);
    }
}
The function "setCameraOrentation" just leads to a
world.getCamera().setBack(RotMatrix);
where RotMatrix is the matrix passed to it.
What I'm not sure of is how the matrix from Android's SensorManager.getRotationMatrix() corresponds to jPCT's "setBack" matrix :-/
I know one is 3x3 and the other is 4x4... but I think I dealt with that correctly, so I'm not sure what's wrong now :?
The jPCT camera moves on rotations, but clearly not correctly (nor by any simple angular displacement).
There's also this code here, which I've tried with a similar lack of success:
http://mysticlakesoftware.blogspot.com/2009/07/sensor-accelerometer-magnetics.html
That code features a filtering function, which is nice, but I still can't match the output to the jPCT camera.
Also, that code seems very slow compared to the above.
If the matrix from Android is similar to what OpenGL uses, it's most likely column-major, while jPCT's matrices are row-major. You have to convert between them by turning rows into columns. The easiest way is to create a float[16] array for Matrix.setDump() and fill it accordingly, so that
a d g
b e h
c f j
becomes
a b c 0
d e f 0
g h j 0
0 0 0 1
In addition, you have to convert between the coordinate systems. You can either do this by rotating the matrix 90° around the x-axis or by negating the second and third columns of the matrix (which can be done while filling the array anyway). The next release will include a method that does this conversion.
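For example, a minimal sketch of that conversion, assuming the sensor matrix really is a column-major float[9] and folding the column negation into the copy (setDump() takes the 16 values in row order; world is assumed to be your jPCT World instance):

// Sketch: column-major 3x3 from the sensor -> row-major 4x4 for jPCT,
// negating the second and third columns to convert coordinate systems.
// Reuse 'dump' and 'back' instead of allocating them per frame.
float[] dump = new float[16];
Matrix back = new Matrix();

void sensorToJpct(float[] rot) {
    dump[0] = rot[0]; dump[1] = -rot[1]; dump[2]  = -rot[2]; dump[3]  = 0;
    dump[4] = rot[3]; dump[5] = -rot[4]; dump[6]  = -rot[5]; dump[7]  = 0;
    dump[8] = rot[6]; dump[9] = -rot[7]; dump[10] = -rot[8]; dump[11] = 0;
    dump[12] = 0;     dump[13] = 0;      dump[14] = 0;       dump[15] = 1;
    back.setDump(dump);
    world.getCamera().setBack(back);
}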
I thought that at first but.....
Quote
Each matrix is returned either as a 3x3 or 4x4 row-major matrix depending on the length of the passed array:
If the array length is 16:
/ M[ 0] M[ 1] M[ 2] M[ 3] \
| M[ 4] M[ 5] M[ 6] M[ 7] |
| M[ 8] M[ 9] M[10] M[11] |
\ M[12] M[13] M[14] M[15] /
This matrix is ready to be used by OpenGL ES's glLoadMatrixf(float[], int).
Note that because OpenGL matrices are column-major matrices you must transpose the matrix before using it. However, since the matrix is a rotation matrix, its transpose is also its inverse, conveniently, it is often the inverse of the rotation that is needed for rendering; it can therefore be used with OpenGL ES directly.
Also note that the returned matrices always have this form:
/ M[ 0] M[ 1] M[ 2] 0 \
| M[ 4] M[ 5] M[ 6] 0 |
| M[ 8] M[ 9] M[10] 0 |
\ 0 0 0 1 /
If the array length is 9:
/ M[ 0] M[ 1] M[ 2] \
| M[ 3] M[ 4] M[ 5] |
\ M[ 6] M[ 7] M[ 8] /
so that's right isn't it?
I guess it's the differing coordinate systems causing the problem?
Also, where's the most efficient place to put the
world.getCamera().setBack(CameraMatrix);
?
The camera rotation only updates every second or so at the moment, so I'm trying to work out what part of my code is slowing it down.
Then just try to apply a rotateX((float) Math.PI) to the matrix (the 90° I wrote in my former post is of course wrong; it has to be 180). Or maybe you have to invert it in addition for it to be useful? Keep in mind that a camera transformation is actually an inverse world transformation. How are you creating the jPCT Matrix? Have you ensured that the result really looks like
/ M[ 0] M[ 1] M[ 2] 0 \
| M[ 4] M[ 5] M[ 6] 0 |
| M[ 8] M[ 9] M[10] 0 |
\ 0 0 0 1 /
?
The place where you set the Matrix shouldn't matter. It's not expensive. Just try to avoid object creation where possible, i.e. don't create a new Matrix each frame if possible.
It's basically:
SensorManager.getRotationMatrix(Rt, I, accels, mags);
Matrix tempR = new Matrix();
tempR.setRow(0, Rt[0], Rt[1], Rt[2],0);
tempR.setRow(1, Rt[3], Rt[4], Rt[5],0);
tempR.setRow(2, Rt[6], Rt[7], Rt[8],0);
tempR.setRow(3, 0, 0, 0,1);
tempR.rotateX((float)Math.PI);
arView.setCameraOrentation(tempR);
Is this correct?
Rt[] is supposed to be a 3x3 matrix returned by getRotationMatrix according to Google's documentation. (I defined it as Rt[9].)
Something is still very wrong, unfortunately. I haven't inverted it yet, but when the phone is laid flat on the table, using the above code I get this:
The diagonal angle is odd, no? (This is basically a simplified version of the demo scene.)
[attachment deleted by admin]
Quote from: paulscode on May 17, 2010, 01:03:47 AM
Quote from: Darkflame on May 14, 2010, 11:20:30 PM
I know how to read the sensors and get (rough) angles in the x/y/z from both magnetic and gravitational sensors.
Now if only you could get exact GPS coordinates for the phone as well - with that and the angles, you could, for example, place a secret clue somewhere, and create a real-world treasure hunt game that people use their androids to play..
Yes, that would be a cool idea :) But another project XD
Not really, if I can get my system working, Treasure hunts will be possible :)
Ok, I decided to give this another go, and I'm determined to get it right :)
I'm going to go step by step this time and be as methodical as possible.
I'm currently just blitting a rotation matrix onto the screen.
The source of which is "Rt" from
SensorManager.getRotationMatrix(Rt, I, accels, mags);
where accels and mags are from the sensors.
Question:
How often should this update? It seems to be far too slow to be of any use.
I mean, it only gives out a new matrix once every 2-3 seconds, even though it's triggered hundreds of times in between.
Does anyone successfully use getRotationMatrix here? At what speed does it update? I'm using an HTC Legend, Android 2.1.
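For what it's worth, the rate mostly depends on the delay you pass to registerListener(); SENSOR_DELAY_NORMAL is throttled quite hard on some handsets. A sketch of registering both sensors at game rate (assuming 'this' is your Activity and implements SensorEventListener):

SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
        SensorManager.SENSOR_DELAY_GAME);
sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
        SensorManager.SENSOR_DELAY_GAME);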
Is it possible to use the Android camera with jPCT-AE as a renderer that overlays the camera???
I don't think so. When I initialize
World world = new World();
it renders a black screen by itself...
I haven't seen any function in the World class that lets me change its black color to make it transparent.
There is a function in the FrameBuffer class,
public void blit(int[] src, ....... , boolean transparent)
with which I can get a transparent background... But the problem I am facing is: how can I get the first parameter of this function, the current renderer texture??
You can simply clear the framebuffer with a color that has an alpha value assigned (http://www.jpct.net/jpct-ae/download/alpha/doc/com/threed/jpct/FrameBuffer.html#clear(com.threed.jpct.RGBColor)). That will give you the transparent background if everything else is set up correctly.
EgonOlsen, thanks for your reply...
But I am still not able to get a transparent background :'(
I will be very glad if you can fix my problem.....
In the onDrawFrame function I have written the following code:
fb.clear();
world.renderScene(fb);
world.draw(fb);
fb.blit(fb.getPixels(), this.widht, this.height, 0, 0, this.widht, this.height, this.widht, this.height, true);
blitNumber(lfps, 5, 5);
fb.display();
I am still not able to find any solution...
My complete code is:
package com.HelloAndroid;
import java.security.PublicKey;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLDisplay;
import javax.microedition.khronos.opengles.GL10;
import android.app.Activity;
import android.content.res.Resources;
import android.graphics.Color;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.util.FloatMath;
import android.util.Log;
import android.view.KeyEvent;
import android.view.MotionEvent;
import com.threed.jpct.Camera;
import com.threed.jpct.Config;
import com.threed.jpct.FrameBuffer;
import com.threed.jpct.GenericVertexController;
import com.threed.jpct.Interact2D;
import com.threed.jpct.Light;
import com.threed.jpct.Loader;
import com.threed.jpct.Logger;
import com.threed.jpct.Object3D;
import com.threed.jpct.Primitives;
//import com.threed.jpct.R;
import com.threed.jpct.RGBColor;
import com.threed.jpct.SimpleVector;
import com.threed.jpct.Texture;
import com.threed.jpct.TextureManager;
import com.threed.jpct.World;
/**
* A simple demo. This shows more how to use jPCT-AE than it shows how to write
* a proper application for Android, because i have no idea how to do this. This
* thing is more or less a hack to get you started...
*
* @author EgonOlsen
*
*/
public class HelloAndroid extends Activity {
private GLSurfaceView mGLView;
private MyRenderer renderer;
private FrameBuffer fb = null;
private World world = null;
private int move = 0;
private float turn = 0;
private boolean paused = false;
public Object3D testObj;
private float touchTurn = 0;
private float touchTurnUp = 0;
private float xpos = -1;
private float ypos = -1;
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
mGLView = new GLSurfaceView(this);
mGLView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
renderer = new MyRenderer();
mGLView.setRenderer(renderer);
mGLView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
setContentView(mGLView);
}
@Override
protected void onPause() {
paused = true;
super.onPause();
mGLView.onPause();
}
@Override
protected void onResume() {
paused = false;
super.onResume();
mGLView.onResume();
}
protected void onStop() {
renderer.stop();
super.onStop();
}
public boolean onTouchEvent(MotionEvent me) {
if (me.getAction() == MotionEvent.ACTION_DOWN) {
xpos = me.getX();
ypos = me.getY();
//testObj = renderer.PickObj(xpos, ypos);
//Log.d("hello", testObj.getName());
return true;
}
if (me.getAction() == MotionEvent.ACTION_UP) {
xpos = -1;
ypos = -1;
touchTurn = 0;
touchTurnUp = 0;
return true;
}
if (me.getAction() == MotionEvent.ACTION_MOVE) {
float xd = me.getX() - xpos;
float yd = me.getY() - ypos;
xpos = me.getX();
ypos = me.getY();
touchTurn = xd / 100f;
touchTurnUp = yd / 100f;
return true;
}
return super.onTouchEvent(me);
}
public boolean onKeyDown(int keyCode, KeyEvent msg) {
if (keyCode == KeyEvent.KEYCODE_W) {
move = 2;
return true;
}
if (keyCode == KeyEvent.KEYCODE_S) {
move = -2;
return true;
}
if (keyCode == KeyEvent.KEYCODE_D) {
turn = 0.05f;
return true;
}
if (keyCode == KeyEvent.KEYCODE_A) {
turn = -0.05f;
return true;
}
return super.onKeyDown(keyCode, msg);
}
public boolean onKeyUp(int keyCode, KeyEvent msg) {
if (keyCode == KeyEvent.KEYCODE_W) {
move = 0;
return true;
}
if (keyCode == KeyEvent.KEYCODE_S) {
move = 0;
return true;
}
if (keyCode == KeyEvent.KEYCODE_D) {
turn = 0;
return true;
}
if (keyCode == KeyEvent.KEYCODE_A) {
turn = 0;
return true;
}
return super.onKeyUp(keyCode, msg);
}
protected boolean isFullscreenOpaque() {
return true;
}
class MyRenderer implements GLSurfaceView.Renderer {
private Object3D plane = null;
private Object3D tree2 = null;
private Object3D tree1 = null;
private Object3D grass = null;
private Texture font = null;
private int fps = 0;
private int lfps = 0;
private long time = System.currentTimeMillis();
private Light sun = null;
private Object3D rock = null;
private boolean stop = false;
private float ind;
private boolean deSer = false;
private int height;
private int widht;
RGBColor transc = new RGBColor(0, 0, 999);
public MyRenderer() {
Config.maxPolysVisible = 5000;
Config.farPlane = 1500;
}
public void stop() {
stop = true;
if (fb != null) {
fb.dispose();
fb = null;
}
}
public void onSurfaceChanged(GL10 gl, int w, int h) {
if (fb != null) {
fb.dispose();
}
fb = new FrameBuffer(gl, w, h);
this.widht = w;
this.height = h;
}
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
TextureManager.getInstance().flush();
world = new World();
Resources res = getResources();
TextureManager tm = TextureManager.getInstance();
Texture grass2 = new Texture(res.openRawResource(R.raw.grassy));
Texture leaves = new Texture(res.openRawResource(R.raw.tree2y));
Texture leaves2 = new Texture(res.openRawResource(R.raw.tree3y));
Texture rocky = new Texture(res.openRawResource(R.raw.rocky));
Texture planetex = new Texture(res.openRawResource(R.raw.planetex));
font = new Texture(res.openRawResource(R.raw.numbers));
tm.addTexture("grass2", grass2);
tm.addTexture("leaves", leaves);
tm.addTexture("leaves2", leaves2);
tm.addTexture("rock", rocky);
tm.addTexture("grassy", planetex);
// Use the normal loaders...
plane = Primitives.getPlane(20, 30);
grass = Loader.load3DS(res.openRawResource(R.raw.grass), 5)[0];
rock = Loader.load3DS(res.openRawResource(R.raw.rock), 15f)[0];
tree1 = Loader.load3DS(res.openRawResource(R.raw.tree2), 5)[0];
tree2 = Loader.load3DS(res.openRawResource(R.raw.tree3), 5)[0];
plane.setTexture("grassy");
rock.setTexture("rock");
grass.setTexture("grass2");
tree1.setTexture("leaves");
tree2.setTexture("leaves2");
tree1.setName("HelloOBJ");
// testing collision
tree1.setCollisionMode(Object3D.COLLISION_CHECK_OTHERS);
plane.getMesh().setVertexController(new Mod(), false);
plane.getMesh().applyVertexController();
plane.getMesh().removeVertexController();
grass.translate(-45, -17, -50);
grass.rotateZ((float) Math.PI);
rock.translate(0, 0, -90);
rock.rotateX(-(float) Math.PI / 2);
tree1.translate(-50, -92, -50);
tree1.rotateZ((float) Math.PI);
tree2.translate(60, -95, 10);
tree2.rotateZ((float) Math.PI);
plane.rotateX((float) Math.PI / 2f);
plane.setName("plane");
tree1.setName("tree1");
tree2.setName("tree2");
grass.setName("grass");
rock.setName("rock");
world.addObject(plane);
//world.addObject(tree1);
//world.addObject(tree2);
//world.addObject(grass);
//world.addObject(rock);
RGBColor dark = new RGBColor(100, 100, 100);
grass.setTransparency(10);
tree1.setTransparency(0);
tree2.setTransparency(0);
tree1.setAdditionalColor(dark);
tree2.setAdditionalColor(dark);
grass.setAdditionalColor(dark);
world.setAmbientLight(20, 20, 20);
world.buildAllObjects();
sun = new Light(world);
Camera cam = world.getCamera();
cam.moveCamera(Camera.CAMERA_MOVEOUT, 250);
cam.moveCamera(Camera.CAMERA_MOVEUP, 100);
cam.lookAt(plane.getTransformedCenter());
cam.setFOV(1.5f);
sun.setIntensity(250, 250, 250);
SimpleVector sv = new SimpleVector();
sv.set(plane.getTransformedCenter());
sv.y -= 300;
sv.x -= 100;
sv.z += 200;
sun.setPosition(sv);
}
public void onDrawFrame(GL10 gl) {
try {
if (!stop) {
if (paused) {
Thread.sleep(500);
} else {
Camera cam = world.getCamera();
if (turn != 0) {
world.getCamera().rotateY(-turn);
}
if (touchTurn != 0) {
world.getCamera().rotateY(touchTurn);
touchTurn = 0;
}
if (touchTurnUp != 0) {
world.getCamera().rotateX(touchTurnUp);
touchTurnUp = 0;
}
if (move != 0) {
world.getCamera().moveCamera(cam.getDirection(), move);
}
fb.clear();
world.renderScene(fb);
world.draw(fb);
fb.blit(fb.getPixels(), this.widht, this.height, 0, 0, this.widht, this.height, this.widht, this.height, true);
blitNumber(lfps, 5, 5);
fb.display();
sun.rotate(new SimpleVector(0, 0.05f, 0), plane.getTransformedCenter());
if (System.currentTimeMillis() - time >= 1000) {
lfps = (fps + lfps) >> 1;
fps = 0;
time = System.currentTimeMillis();
}
fps++;
ind += 0.02f;
if (ind > 1) {
ind -= 1;
}
}
} else {
if (fb != null) {
fb.dispose();
fb = null;
}
}
} catch (Exception e) {
Logger.log("Drawing thread terminated!", Logger.MESSAGE);
}
}
private class Mod extends GenericVertexController {
private static final long serialVersionUID = 1L;
public void apply() {
SimpleVector[] s = getSourceMesh();
SimpleVector[] d = getDestinationMesh();
for (int i = 0; i < s.length; i++) {
d[i].z = s[i].z - (10f * (FloatMath.sin(s[i].x / 50f) + FloatMath.cos(s[i].y / 50f)));
d[i].x = s[i].x;
d[i].y = s[i].y;
}
}
}
private void blitNumber(int number, int x, int y) {
if (font != null) {
String sNum = Integer.toString(number);
for (int i = 0; i < sNum.length(); i++) {
char cNum = sNum.charAt(i);
int iNum = cNum - 48;
fb.blit(font, iNum * 5, 0, x, y, 5, 9, true);
x += 5;
}
}
}
public Object3D PickObj(float x, float y){
SimpleVector position = new SimpleVector(Interact2D.reproject2D3D(world.getCamera(), fb, (int) x,(int) y));
Object[] result = world.calcMinDistanceAndObject3D(world.getCamera().getPosition(), position, 10000F);
return (Object3D)result[1];
}
}
}
It might help if you actually do what I had written: clear with alpha(!) and remove that blit call.
Sir,
actually I am not getting you... Would you please explain how I can clear with alpha(!)?
Do you mean this?
RGBColor transp = new RGBColor(0,0,0,0);
//fb.clear();
fb.clear(transp);
world.renderScene(fb);
world.draw(fb);
//fb.blit(fb.getPixels(), this.widht, this.height, 0, 0, this.widht, this.height, this.widht, this.height, true);
blitNumber(lfps, 5, 5);
fb.display();
Yes, something like that. I've never used this myself, so I'm not sure what else you have to do (Android-side) to get it working, but on the jPCT side, this should be all that is required. And it works: http://www.jpct.net/forum2/index.php/topic,1542.60.html
EgonOlsen,
thanks for your reply, sir......
But I haven't got any idea how to do this...
I am feeling frustrated... can you please help me out?
As said, I never did this myself, so I'm not a great help here. But this thread is all about doing what you want to do, so I guess the answer lies somewhere within it. I suggest taking the code and XML snippets that dl.zerocool posted at the beginning of the thread, making an Activity from that alone, and seeing what happens. Your code seems to be a mix of my example and zerocool's stuff, and I assume that you are simply missing some important step... but it's easier to find that if you start simple and clean, IMHO. For example, you are still doing this:
protected boolean isFullscreenOpaque() {
return true;
}
....which is fine for my example, but obviously makes no sense when trying to create a view that isn't opaque. I don't know if this is the root cause of the problem though.
Edit: BTW, is your mail-addr really ..@gmal.com (not gmail.com?). I keep getting bounces from my server regarding this address. Please correct it, if it's wrong. Thanx.
Hi sir, I have got another problem.....
I am not able to rotate the camera using the Android sensor orientation.....
I have done this:
SensorManager.getRotationMatrix(RTmp, I, grav, mag);
Rt=RTmp;
tempR = world.getCamera().getBack();
tempR.setRow(0, Rt[0], Rt[1], Rt[2],0);
tempR.setRow(1, Rt[3], Rt[4], Rt[5],0);
tempR.setRow(2, Rt[6], Rt[7], Rt[8],0);
tempR.setRow(3, 0, 0, 0,1);
world.getCamera().setBack(tempR);
Can you please give me some idea how to do this??
Thanks in advance.............
No idea what's actually in that rotation matrix from the sensor. It might help if you do some test rotations and post the content of the resulting matrix. Most likely this is either a row/column-major issue or caused by differences in the coordinate systems. What's the actual result if you execute your code?
Sir, it shows only a black screen.
pritom057 - is that my code you're using? What variables did you feed into RTmp, I, grav and mag?
In the end I solved my problem with the following code:
switch (s_ev.sensor.getType()) {
    case Sensor.TYPE_ACCELEROMETER:
        System.arraycopy(s_ev.values, 0, mGravs, 0, 3);
        break;
    case Sensor.TYPE_MAGNETIC_FIELD:
        System.arraycopy(s_ev.values, 0, mGeoMags, 0, 3);
        break;
    default:
        return;
}
if (SensorManager.getRotationMatrix(mRotationM, null, mGravs, mGeoMags)) {
    // Rotate to the camera's line of view (Y axis along the camera's axis)
    SensorManager.remapCoordinateSystem(mRotationM, SensorManager.AXIS_X,
            SensorManager.AXIS_Z, mRemapedRotationM);
    SensorManager.getOrientation(mRemapedRotationM, mOrientation);
    SimpleVector cameraVector = new SimpleVector();
    cameraVector.x = mOrientation[1];
    cameraVector.y = mOrientation[2];
    cameraVector.z = mOrientation[0];
    myworld.setCameraOrientation(cameraVector);
}
setCameraOrientation() leads to:
public void setCameraOrientation(SimpleVector xyzAngles) {
    Camera worldcam = world.getCamera();
    worldcam.getBack().setIdentity();
    float Z = xyzAngles.z;
    float Y = xyzAngles.y;
    float X = xyzAngles.x;
    worldcam.rotateCameraAxis(new SimpleVector(0, 1, 0), -Z);
    worldcam.rotateCameraAxis(new SimpleVector(1, 0, 0), X);
    worldcam.rotateCameraAxis(new SimpleVector(0, 0, 1), -Y);
}
I still think the setBack() method should be possible, and quicker too, but this way at least works.
Thanks, Darkflame.
Yes, I am using your code... and now it's working, as you mentioned in your last post...
But I am also not able to use the setBack() function.
Thanks....
Thanks a lot for the ideas you shared, especially dl.zerocool and all the jPCT team. Great!!!
I'm able to overlay the 3D object on the camera now. Quite fast on my HTC Wildfire 2.1.
I have tried many 3D engines. jPCT is the fastest among them, in both rendering and loading.
Nice jPCT engine.
Nice team of developers.
Many thanks.
Quote from: gman on October 19, 2010, 04:40:00 AM
I have tried many 3D engines. jPCT is the fastest among them, in both rendering and loading.
Have you tried the libGDX framework?
Is it faster than jPCT?
They are hardly comparable, IMHO. I had a discussion about some kind of cooperation with the libGDX guys, and I'm still thinking of using their buffer-related stuff as an option for jPCT-AE once vertex updates become a problem. However, I'm not sure if this is the right place to ask this question... ;)
Hi everyone.
This example uses a FrameLayout to display 2 layouts, but they are still in the same activity. How can we do the same thing with 2 activities (the top one having a transparent background)?
In my project, I display some 3D models with a GLSurfaceView and use a transparent-background SurfaceView on top of it to display a small 2D picture on the screen. The problem is that the 2D picture cannot be displayed when all the 3D models are displayed. If I set some 3D models invisible, the 2D picture is shown. Is it a memory or framebuffer problem?
No idea. Personally, I feel that combining OpenGL output with GUI components is asking for trouble, but maybe I'm just a bit old-fashioned here. Do you have a screenshot that shows the problem?
Hi EgonOlsen,
I checked my project again. Now I will describe it in more detail:
- I used a FrameLayout to display a SurfaceView and a GLSurfaceView, with the SurfaceView on top.
- In my test project, everything was fine.
- In my main project, everything works except that the SurfaceView cannot be displayed on top. The difference between the test project and the main project is that in the main project, the glSurfaceView takes a long time to load resources. When I set a transparent background for the glSurfaceView, I can see part of the SurfaceView in the black areas of the glSurfaceView.
- The main project is fine on the emulator, but not on the IS03 or Nexus One.
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    SCREEN_WIDTH = getWindowManager().getDefaultDisplay().getWidth();
    SCREEN_HEIGHT = getWindowManager().getDefaultDisplay().getHeight();
    requestWindowFeature(Window.FEATURE_NO_TITLE);
    this.getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
            WindowManager.LayoutParams.FLAG_FULLSCREEN);
    fLayout = new FrameLayout(getApplicationContext());
    glSurfaceView = new GLSurfaceView(getApplication());
    glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
    glSurfaceView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
    renderer = new RendererImplement();
    renderer.loadTextures(getResources());
    glSurfaceView.setRenderer(renderer);
    screenSurface = new mySurfaceView(getApplicationContext());
    screenSurface.getHolder().setFormat(PixelFormat.RGBA_8888);
    fLayout.addView(screenSurface);
    fLayout.addView(glSurfaceView);
    setContentView(fLayout);
}
Uhm, I think this is an unknown error when putting a SurfaceView on top of a GLSurfaceView. I used a View instead of a SurfaceView, and everything is OK with the same code! ??? :-\
I saw some really nice AR demos this summer at Uplinq. Curious to see if many game devs will utilize this technology, especially as device hardware quality/performance improves.
I am thinking of creating an AR-based game next year once I launch my first game.
BTW, it's nice to finally be on this forum. Looking forward to developing with jPCT-AE.
Is there a way to play video to a Texture using the Android MediaPlayer API?
// Load video
MediaPlayer mediaPlayer = MediaPlayer.create(m_context, R.raw.videofile);
// Somehow attach the output to a texture
SurfaceHolder sh = ??? (Texture Surface)
mediaPlayer.setDisplay(sh);
// Play video
mediaPlayer.start();
I'm not sure. If you can make a texture a SurfaceHolder, then maybe...
I can't seem to find a way to do it, at least not efficiently.
GLSurfaceView has a getHolder() function that retrieves the SurfaceHolder.
Perhaps MediaPlayer can render to the OpenGL surface directly?
I tried using mGLView.getHolder(), but it crashes. I have yet to check the exception error string, but it appears that you need to use callback functions, since the GL object runs in a different thread.
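On newer Android versions there is a cleaner route (a sketch, untested here; SurfaceTexture needs API 11+, MediaPlayer.setSurface() needs API 14+, and your fragment shader has to sample the texture with samplerExternalOES):

// Sketch: play video into an OpenGL texture. Run this on the GL thread.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture videoTexture = new SurfaceTexture(tex[0]);
mediaPlayer.setSurface(new Surface(videoTexture));
mediaPlayer.start();

// Then once per frame, at the start of onDrawFrame(), still on the GL thread:
videoTexture.updateTexImage(); // latches the newest video frame into the texture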
How do I make the 3D model transform correctly with an AR marker?
I can get the changing transform matrix, and I want to make this transform matrix work with my jPCT-AE model.
How do I do it? Do I work with the camera position matrix or with the model's own transformation matrix?
This thread contains code to set up your Activity at the beginning, as well as code that sets the rotation of the world based on the sensor data (http://www.jpct.net/forum2/index.php/topic,1586.msg11938.html#msg11938). It should actually be sufficient...
What's the difference between transforming the 3D model and transforming the world camera? ???
Object/model space is the space the object itself is in after loading. World space is the space it will be transformed into based on its rotation/translation/... matrices, and camera space is the space where it's positioned relative to the camera, assuming that the camera is fixed at 0,0,0 looking down z. Think of a camera transformation not as if you were actually moving your head around, but as if the world were moving around your head, because that's what basically happens in 3D graphics.
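A tiny illustration of the three spaces in jPCT terms (the object, position and numbers are arbitrary, just for illustration):

// Object space -> world space: the object's own matrices place it in the world.
Object3D box = Primitives.getBox(10f, 1f);
box.translate(0, 0, 100);           // world transform
box.rotateY((float) Math.PI / 4f);  // world transform

// World space -> camera space: conceptually the inverse; the world ends up
// transformed so that the camera sits at 0,0,0 looking down z.
Camera cam = world.getCamera();
cam.setPosition(new SimpleVector(0, -50, 0));
cam.lookAt(box.getTransformedCenter());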
A quick query, or a bug...!
I am doing some of the same things as in this thread, but I have opted to use the AAConfigChooser, as it makes my scene look a lot nicer. However, I've lost the transparency and therefore cannot see the camera preview (except on areas of textures!).
In the SurfaceView;
setEGLContextClientVersion(2);
setEGLConfigChooser(new AAConfigChooser(this));
getHolder().setFormat(PixelFormat.RGBA_8888);
And then within the renderer's onDrawFrame():
frameBuffer.clear(new RGBColor(0,0,0,0));
theWorld.renderScene(frameBuffer);
theWorld.draw(frameBuffer);
frameBuffer.display();
Does AAConfigChooser() support transparent background, and if so, how?
Incidentally, I have tried PixelFormat.TRANSLUCENT within the SurfaceView to no effect.
thanks.
The AAConfigChooser doesn't specify an alpha value for the config, which means that it defaults to 0. I can change this and give you an updated version to try later.
Awesome, thanks Egon. Let me know and I'll test.
Please try this jar: http://www.jpct.de/download/beta/jpct_ae.jar. It has an additional constructor that takes a boolean to enable/disable alpha. I haven't tested it; I'm just setting the configuration and hoping for the best...
Brilliant - that's done it!
The final setup within the SurfaceView is...
setEGLContextClientVersion(2);
setEGLConfigChooser(new AAConfigChooser(this, true));
getHolder().setFormat(PixelFormat.TRANSLUCENT);
//setEGLConfigChooser(8, 8, 8, 8, 16, 8);//Tablet compatible (RETEST)
// Setup the renderer
String inC = Globals.App_Prefs.getString("BackgroundColour", "0,0,0,0");
RGBColor background = Utilities.decodeColour(inC);
mRenderer = new JpctRenderer(background);
setRenderer(mRenderer);
All I have to do now is align the objects to the world without billboarding!
Cheers.
[attachment deleted by admin]
Might help to link to this thread: http://www.jpct.net/forum2/index.php/topic,2461.msg18413.html#msg18413
Egon,
Unfortunately, some test users (including myself) are now getting an IndexOutOfBounds exception within AAConfigChooser().
This code
setEGLContextClientVersion(2);
if (Main.isTablet()) {
setEGLConfigChooser(8, 8, 8, 8, 16, 8);
} else {
setEGLConfigChooser(new AAConfigChooser(this, true));
}
getHolder().setFormat(PixelFormat.TRANSLUCENT);
has been working fine for months, but now running it on lesser devices than my Galaxy SII results in the following stack trace on start-up:
FATAL EXCEPTION: GLThread 9
java.lang.ArrayIndexOutOfBoundsException
at com.threed.jpct.util.AAConfigChooser.chooseConfig(AAConfigChooser.java:131)
at android.opengl.GLSurfaceView$EglHelper.start(GLSurfaceView.java:918)
at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1248)
at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1118)
I must add that I have had to download a more recent beta of jPCT-AE as my dev environment died and I had to re-acquire several libraries - I don't know if this would make a difference...
Any thoughts/ tips would be great.
Seems like a flaw... when no matching config is found, it should choose the first one... but it chooses the -1th one instead, which fails. This version should fix it: http://jpct.de/download/beta/jpct_ae.jar. However, I've no idea what that first config will actually be and whether it fits...
Thanks Egon, that's done the trick.
As an aside, you wouldn't happen to have a good bit of code for working out the actual available memory for the running application? (Just wondering if you already do anything in jPCT in terms of memory checking.)
Thanks again,
Mike
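For reference, the standard VM and Android calls give a rough picture of this (a sketch, not anything jPCT-specific; getMemoryClass() needs API 5+):

// Sketch: rough view of the app's memory budget and current usage.
ActivityManager am = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
int heapClassMb = am.getMemoryClass(); // per-app heap budget in MB

Runtime rt = Runtime.getRuntime();
long usedBytes = rt.totalMemory() - rt.freeMemory();
long maxBytes = rt.maxMemory();        // hard VM limit in bytes
Log.i("Memory", "used " + (usedBytes >> 20) + "MB of " + (maxBytes >> 20)
        + "MB (memory class " + heapClassMb + "MB)");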
Thanks for the helpful tips! I compiled a list of some of the top resources I found around this topic of 3D Android augmented reality, and I included this post. Check it out; I hope it can be useful to other developers here. :) http://www.verious.com/board/Giancarlo-Leonio/3d-android-augmented-reality/
Quote from: dl.zerocool on May 11, 2010, 08:06:35 PM
First we need to set up an XML layout with a GLSurfaceView (where we draw the 3D) followed by a SurfaceView for the camera preview.
[FrameLayout XML trimmed; see the first post]
I'm experimenting with this idea; very useful post :)
What I don't get is that in this layout the camera view should be on top of the GL view, not the other way around. But it's not ??? Furthermore, if I switch the order, the GL view disappears ::)
What am I missing?
Unfortunately I can't answer your question directly (as to why), but I struggled for some time to get a stable implementation whereby the AR data was always on top of the camera, especially whilst toggling the camera preview on and off.
I always implemented this in code, not using a layout, as follows:
In the main activity's onCreate method, in this order:
Create the CameraPreview.
Create the SurfaceView (with renderer).
In the CameraPreview constructor:
RelativeLayout tmpLayout = new RelativeLayout( appContext );
tmpLayout.addView(this, 0,
new LayoutParams(
LayoutParams.FILL_PARENT,
LayoutParams.FILL_PARENT)
);
activity.setContentView(tmpLayout);
In the SurfaceView constructor;
setEGLContextClientVersion(2);
setEGLConfigChooser(new AAConfigChooser(this, true));
getHolder().setFormat(PixelFormat.TRANSLUCENT);
// Setup the renderer
mRenderer = new JpctRenderer( mainARView.orientationListener.tabletMode );
Activity activity = (Activity)mainARView.appContext;
// Add this view to the main activity
setZOrderMediaOverlay(true);
activity.addContentView(this, new LayoutParams(
LayoutParams.FILL_PARENT,
LayoutParams.FILL_PARENT));
If you want a UI on top of both, do this in the main activity after creating the SurfaceView:
uiScreen = new UserInterfaceView(this);
addContentView(uiScreen, fillLayout);
In onResume;
camScreen.start();
augScreen.setVisibility(View.VISIBLE);
augScreen.onResume();
In onPause;
augScreen.setVisibility(View.INVISIBLE);
augScreen.onPause();
camScreen.stop();
Hope this helps.
M
Thanks :) I guess making the layout either from XML or from code ends up being the same thing.
I believe the right way of doing this is somehow making the camera render to the GLSurfaceView. Has anyone done this?
You can do this using the Vuforia SDK - it provides the image recognition and camera rendering via one renderFrame() call, which goes in the onDrawFrame method along with the jPCT calls.
There's a pretty good article on the wiki, which I followed (mostly) to do just that.
Yes, I saw that wiki page. Actually, the idea of rendering both into the same surface came from there.
But I cannot use it. Is it open source? The rendering code may help.
QCAR/Vuforia is not open source, but it is free to use in private and commercial apps: https://developer.vuforia.com/legal/license/2-8
Which bit can't you use (for commercial or technical reasons)?
frameBuffer.clear(background);
/* Call our native QCAR function to render the video
* background and scan images for recognition */
renderFrame();
// Render the AR data to the jPCT scene
theWorld.renderScene(frameBuffer);
theWorld.draw(frameBuffer);
// Display our content...
frameBuffer.display();
I wasn't aware Vuforia is free for commercial apps :)
Quote
Which bit can't you use (commercial reasons or technical?)
Both, I suppose; our pipeline depends on OpenCV-based detection. OpenCV's Android library renders onto a Canvas (i.e. a SurfaceView, not a GLSurfaceView), and here I am trying to merge them into one ;)
I wasn't aware OpenCV had an Android implementation... I'll have to look that up when I have the chance.
I can't help with the canvas much, but with Vuforia you could always nobble the part that actually checks for Trackables and just utilise the camera preview segments, although that's possibly a bit heavy-handed.
JNIEXPORT void JNICALL
Java_com_augtech_awilaSDK_graphics_JpctRenderer_renderFrame(JNIEnv* env, jobject obj) {
    jclass activityClass = env->GetObjectClass(obj); // the class of our graphics engine
    // Clear color and depth buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Get the state from QCAR and mark the beginning of a rendering section
    QCAR::State state = QCAR::Renderer::getInstance().begin();
    // Explicitly render the video background
    QCAR::Renderer::getInstance().drawVideoBackground();
Hi,
I'm working on an augmented reality project using jPCT-AE as the rendering engine.
I need to dynamically load a 3D object onto the camera surface from a data buffer (memory), not from the raw or assets directory.
Is that possible?
Thanks in advance
Sure. jPCT's Loader (http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Loader.html) class works on InputStreams; it does not care about the source of the stream.
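For example, if the AR engine hands you the model and texture as byte arrays, something like this sketch should work (fetchModelBytes()/fetchTextureBytes() are hypothetical stand-ins for however your buffer gets filled; needs import java.io.ByteArrayInputStream):

// Sketch: load a 3DS model plus its texture entirely from memory.
byte[] modelBytes = fetchModelBytes();     // hypothetical: filled by the AR engine
byte[] textureBytes = fetchTextureBytes(); // hypothetical: e.g. a PNG or JPG

Object3D model = Loader.load3DS(new ByteArrayInputStream(modelBytes), 1f)[0];
TextureManager.getInstance().addTexture("dynamicTex",
        new Texture(new ByteArrayInputStream(textureBytes)));
model.setTexture("dynamicTex");
model.build();
world.addObject(model);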
Thanks for your answer, but
what about the texture images?
I don't want to load them from the raw or assets folder.
I want to load them from memory (a data buffer).
Thanks in advance.
Yes, that's also possible. I've used the camera's and the media player's output as the texture of a 3D object by using GL_TEXTURE_EXTERNAL_OES.
Have a look at this thread:
http://www.jpct.net/forum2/index.php/topic,3794.0.html
In the project I have to dynamically show 3D objects on the camera surface.
The renderer engine is notified by the AR engine that a new 3D object is ready to be displayed on the camera's surface.
The 3D object is stored dynamically by the AR engine in a data buffer.
So the renderer engine reads the content from the data buffer, parses it, and then must show the 3D object on the camera's surface.
Do you think that this will work with jPCT-AE?
Before parsing the contents of the data buffer, must I know the type of the 3D object (.3ds, .obj, ...)?
Do I need to specify the location of the texture images?
Thanks in advance
If you mean rendering into Android's camera view, I guess that's not possible. Some people (for example, at the beginning of this thread) put the camera view and the GLSurfaceView (the view jPCT renders into) on top of each other. IMHO, this is not the correct way. I take the camera data and blit it into the GLSurfaceView as a background, then draw any 3D objects on top of that as I like.
Somewhere in the forum you can find information about that.
Actually, the 3D objects are rendered on the GLSurfaceView surface.
I use two surfaces, the camera and the GLSurfaceView, one on top of the other.
So what do you think: is it possible to dynamically load 3D objects into the GLSurfaceView?
These 3D objects are stored on a remote server, and under some conditions the AR engine downloads them.
As I described before, the AR engine then fills a data buffer with the content of the downloaded 3D object and notifies the renderer engine.
The renderer engine then reads the contents of the data buffer and draws the 3D object on the GLSurfaceView.
Must I know the specific type of the 3D object (.3ds, .obj, ...) before reading and parsing the contents of the data buffer?
What about the texture images? They are no longer stored in the assets or raw folder.
Thanks in advance.
Yes, you can do that. As I said, jPCT works on streams; it doesn't care about the source of a stream. For the type of the object, maybe the AR engine can add another bit of data describing the type. Inspecting the first bytes of the stream may also help.
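A sketch of that last idea: a 3DS file starts with the main chunk id 0x4D4D (the ASCII bytes 'MM'), while an OBJ file is plain text, so peeking at the first bytes already separates those two formats:

// Sketch: guess the format from the first bytes of the buffer.
// Only distinguishes 3DS from "probably a text format like OBJ".
String guessFormat(byte[] data) {
    if (data.length >= 2 && (data[0] & 0xFF) == 0x4D && (data[1] & 0xFF) == 0x4D) {
        return "3ds";
    }
    return "obj"; // or another text format; inspect further if needed
}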
Thanks for the help.
I'm sure your answer will be useful for completing the project.
Hi,
I want to blit a captured image from the camera to a FrameBuffer object in the onDraw() method.
On top of this image I want to display some 3D objects. It is an augmented reality application.
What is the best way to blit a captured image from the camera to the FrameBuffer?
The fastest way?
Thanks in advance
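One common approach (a sketch, untested; it mirrors the ten-argument FrameBuffer.blit(int[], ...) call used earlier in this thread, and the YUV conversion is the plain loop from the Android demo code, not the fastest possible; 'rgb', 'width', 'height', 'fb' and 'world' are assumed fields) is to grab NV21 frames in onPreviewFrame(), convert them to an int[] of ARGB pixels, and blit that as the background before rendering the world:

// Sketch: camera preview as jPCT background. 'rgb' is preallocated once
// with width * height entries and reused for every frame.
public void onPreviewFrame(byte[] nv21, Camera camera) {
    decodeNV21(nv21, rgb, width, height);
}

// In onDrawFrame(), before rendering the world:
fb.blit(rgb, width, height, 0, 0, 0, 0, width, height, false);
world.renderScene(fb);
world.draw(fb);
fb.display();

// Standard NV21 -> ARGB conversion (from the Android demo code).
void decodeNV21(byte[] yuv, int[] rgb, int width, int height) {
    int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = Math.max(0, (0xff & yuv[yp]) - 16);
            if ((i & 1) == 0) {
                v = (0xff & yuv[uvp++]) - 128;
                u = (0xff & yuv[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = Math.max(0, Math.min(262143, y1192 + 1634 * v));
            int g = Math.max(0, Math.min(262143, y1192 - 833 * v - 400 * u));
            int b = Math.max(0, Math.min(262143, y1192 + 2066 * u));
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                    | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}

Doing the conversion in native code, or using a GL_TEXTURE_EXTERNAL_OES camera texture as mentioned above, will be faster than this Java loop.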