Augmented reality using JPCT-AE with OpenCV

Started by rhadamanthus, May 03, 2013, 05:25:11 AM


rhadamanthus

Hello,
I am using OpenCV to calibrate the camera, which gives me a 3x3 matrix of intrinsic parameters
fx 0  ox
0  fy oy
0  0  1


with the actual values as follows:
966.64154, 0.0      , 477.89288
0.0      , 966.64154, 363.23544
0.0      , 0.0      , 1.0


I also use OpenCV to compute the location of the cubes with respect to the camera, which means I don't need to move the camera, only the cubes (one or two for now).

I see that I can convert an OpenCV matrix directly into a row-major float array and use that in JPCT to move the cubes.
However, I'm not sure how to use the camera parameters matrix. It seems that I can't set the projection matrix directly in the Camera class. And I'm not exactly sure how to convert those parameters into something that the Camera class understands (like FOV, for example).
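
For context, this is roughly what I have in mind for moving one cube from an OpenCV pose (the rvec/tvec returned by solvePnP). The OpenCV calls are standard, but the jPCT Matrix.setDump() call, the row ordering and the variable names are my own assumptions and not tested yet:

import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;
import com.threed.jpct.Matrix;
import com.threed.jpct.Object3D;

// rvec/tvec: outputs of Calib3d.solvePnP() for one cube; cube is the Object3D
Mat rot = new Mat();
Calib3d.Rodrigues(rvec, rot);                 // 3x1 rotation vector -> 3x3 rotation matrix

float[] dump = new float[16];                 // row-major 4x4 dump for jPCT
for (int r = 0; r < 3; r++) {
    for (int c = 0; c < 3; c++) {
        dump[r * 4 + c] = (float) rot.get(r, c)[0];
    }
}
dump[15] = 1f;

Matrix rotation = new Matrix();
rotation.setDump(dump);                       // assuming setDump() takes a row-major float[16]
cube.setRotationMatrix(rotation);

cube.clearTranslation();
cube.translate((float) tvec.get(0, 0)[0],
               (float) tvec.get(1, 0)[0],
               (float) tvec.get(2, 0)[0]);    // an axis flip may still be needed between
                                              // the OpenCV and jPCT conventions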

Any hints?

EgonOlsen

What kind of matrix is that? What do f and o stand for?

rhadamanthus

It's the intrinsic parameter matrix of the camera: f is the focal length (in pixels) and o is the principal point.
It seems to me that it's what OpenGL calls the projection matrix. However, according to the computer vision lectures and books, it's supposed to be multiplied by a 3x4 transform matrix (the camera transform) to get the projection matrix. I guess it's a terminology difference.
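
For what it's worth, from those same lectures the view angles should follow directly from fx, fy and the image size. Assuming my calibration frames were 960x720 (a guess based on the principal point, not something I've confirmed), that would be roughly:

double imageWidth  = 960.0;    // assumed calibration frame size
double imageHeight = 720.0;
double fx = 966.64154;
double fy = 966.64154;

// pinhole model: full view angle = 2 * atan(size / (2 * focal length in pixels))
double hFovDeg = Math.toDegrees(2.0 * Math.atan(imageWidth  / (2.0 * fx)));   // about 53 degrees
double vFovDeg = Math.toDegrees(2.0 * Math.atan(imageHeight / (2.0 * fy)));   // about 41 degrees

What I still don't know is what the jPCT Camera actually expects as its FOV value.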

I have a few other questions about JPCT, if that's ok.

  • is the default cube primitive axis-aligned?
  • how are the axes oriented? Is y up, x to the right, and z toward the screen?

AugTech

You could always use the Android camera parameters to get FOV values and then apply these to the WorldCamera...

In your camera class

        Camera.Parameters cameraParams = mCamera.getParameters();   // android.hardware.Camera
        float hva = cameraParams.getHorizontalViewAngle();          // view angles in degrees
        float vva = cameraParams.getVerticalViewAngle();


and then in the render class

Camera worldCamera = theWorld.getCamera();                  // com.threed.jpct.Camera
float fovX = worldCamera.convertDEGAngleIntoFOV( hva );     // hva/vva from the camera class above
float fovY = worldCamera.convertDEGAngleIntoFOV( vva );
worldCamera.setFOV( fovX );
worldCamera.setYFOV( fovY );


I use this method and it works fine, except when rotating the device to portrait. The values should really be re-calculated when the camera activity is rotated (like in Wikitude), but I haven't implemented that as yet  :)
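
If you do want to handle rotation, something along these lines in the renderer's onSurfaceChanged() should be close - swap the two angles when the surface is taller than it is wide - but treat it as an untested sketch (hva, vva, fb and theWorld are fields set up elsewhere):

// untested sketch for recomputing the FOV when the surface is rotated
public void onSurfaceChanged(GL10 gl, int width, int height) {
    fb = new FrameBuffer(gl, width, height);       // recreate the FrameBuffer as usual (ES 1.x)

    float h = hva;                                 // angles read from Camera.Parameters earlier
    float v = vva;
    if (height > width) {                          // portrait: the sensor's horizontal angle
        h = vva;                                   // now maps to the screen's vertical axis
        v = hva;
    }

    Camera worldCamera = theWorld.getCamera();
    worldCamera.setFOV(worldCamera.convertDEGAngleIntoFOV(h));
    worldCamera.setYFOV(worldCamera.convertDEGAngleIntoFOV(v));
}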

To answer one of your questions, the axis description is at http://www.jpct.net/wiki/index.php/Coordinate_system

rhadamanthus

Thank you very much, that was really helpful.
I apologize for not reading the documentation thoroughly enough. I'm way behind on the project and didn't know where to begin.