Messages - angelo

#1
The code is already there... in fragments ;)
#2
Hi,

I'm a bit of a beginner with jPCT-AE. I'm having a problem with a 3D model that loads up and blocks the camera. Could it be because of its size? I'm trying to integrate it with Vuforia; if the model is small I don't have a problem. The thing is, if Vuforia hasn't found the image target yet, the 3D model shouldn't show up yet, so please correct me if I'm wrong.

It only becomes a problem if I change the size using mainBody.scale(2).

If I'm wrong, could you help me with code to remove, or delay, the 3D object from showing up until the image target has been found?

Thanks so much!
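
For reference, a minimal sketch of the hide-until-tracked idea that the fix below implements; the class, method, and field names here are illustrative, not from the original project, and only Object3D.setVisibility() is actual jPCT-AE API:

import com.threed.jpct.Object3D;

// Illustrative sketch: keep the jPCT-AE model invisible until the native
// Vuforia code reports that an image target is being tracked.
public class ModelVisibilityHelper {
    private final Object3D model;

    public ModelVisibilityHelper(Object3D model) {
        this.model = model;
        model.setVisibility(false);      // hidden until a target is found
    }

    // Call this (e.g. from a JNI callback) whenever tracking starts or stops.
    public void onTrackingChanged(boolean targetFound) {
        model.setVisibility(targetFound);
    }
}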

------------------------------------------------------------------------------------------------------------

Solved it, but if someone has a better way, please share. Thanks.


//ImageTargets.cpp
JNIEXPORT void JNICALL
Java_com_theoretics_ToyotaAR_ImageTargetsRenderer_renderFrame(JNIEnv *env, jobject obj)
{
    const QCAR::CameraCalibration& cameraCalibration = QCAR::CameraDevice::getInstance().getCameraCalibration();
    QCAR::Vec2F size = cameraCalibration.getSize();
    QCAR::Vec2F focalLength = cameraCalibration.getFocalLength();
    float fovyRadians = 2 * atan(0.5f * size.data[1] / focalLength.data[1]);
    float fovRadians = 2 * atan(0.5f * size.data[0] / focalLength.data[0]);
    float trackableIndex = 0;

    // Look up the Java-side callbacks on the renderer object
    jclass activityClass = env->GetObjectClass(obj);
    jfloatArray modelviewArray = env->NewFloatArray(16);
    jmethodID updateMatrixMethod = env->GetMethodID(activityClass, "updateModelviewMatrix", "([F)V");
    jmethodID fovMethod = env->GetMethodID(activityClass, "setFov", "(F)V");
    jmethodID fovyMethod = env->GetMethodID(activityClass, "setFovy", "(F)V");
    jmethodID trackableIDMethod = env->GetMethodID(activityClass, "setTrackableId", "(F)V");
    jmethodID updateCameraMethod = env->GetMethodID(activityClass, "updateCamera", "()V");

    // Clear color and depth buffer
    //glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Get the state from QCAR and mark the beginning of a rendering section
    QCAR::State state = QCAR::Renderer::getInstance().begin();
    // Explicitly render the video background
    QCAR::Renderer::getInstance().drawVideoBackground();

    // First pass: map the name of any tracked target to an id for the Java side
    for (int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
    {
        const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
        const QCAR::Trackable& trackable = result->getTrackable();

        if (strcmp(trackable.getName(), "pg1") == 0)
        {
            trackableIndex = 5;
        }
        else if (strcmp(trackable.getName(), "pg2") == 0)
        {
            trackableIndex = 6;
        }
        else
        {
            trackableIndex = 7;
        }
    }
    QCAR::Renderer::getInstance().end();

    // Second pass: hand the pose matrix, FOV values and trackable id over to Java
    for (int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
    {
        const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
        QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());

        SampleUtils::rotatePoseMatrix(180.0f, 1.0f, 0, 0, &modelViewMatrix.data[0]);
        // Pass the model view matrix to Java
        env->SetFloatArrayRegion(modelviewArray, 0, 16, modelViewMatrix.data);
        env->CallVoidMethod(obj, updateMatrixMethod, modelviewArray);
        env->CallVoidMethod(obj, updateCameraMethod);
        env->CallVoidMethod(obj, fovMethod, fovRadians);
        env->CallVoidMethod(obj, fovyMethod, fovyRadians);
        env->CallVoidMethod(obj, trackableIDMethod, trackableIndex);
    }
    if (state.getNumTrackableResults() == 0)
    {
        // No target this frame; the id must be passed as a jfloat (8.0f), since setTrackableId takes a float
        env->CallVoidMethod(obj, trackableIDMethod, 8.0f);
    }
    env->DeleteLocalRef(modelviewArray);
}

//ImageTargetsRenderer.java
public void setTrackableId(float tid_) {
    System.out.println("ANGELO TRACKABLE ID:" + tid_);
    tid = tid_;
    if (tid == 5.0f) {
        innerCab.setVisibility(true);
        seats.setVisibility(true);
    } else {
        innerCab.setVisibility(false);
        seats.setVisibility(false);
    }
}



It turned out the innerCab and seats of the car were what was blocking the camera.
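
For context, the Java-side methods that the native code above calls typically look something like the following in the usual jPCT-AE + Vuforia integration; the class name and fields (cam, modelViewMat, fov, fovy) are assumptions, not taken from the original project:

import com.threed.jpct.Camera;
import com.threed.jpct.Matrix;

// Illustrative sketch of the renderer-side callbacks invoked from renderFrame().
public class PoseReceiver {
    private Camera cam;              // e.g. world.getCamera()
    private float[] modelViewMat;
    private float fov;
    private float fovy;

    public void updateModelviewMatrix(float[] mat) {
        modelViewMat = mat;
    }

    public void setFov(float fov)   { this.fov = fov; }
    public void setFovy(float fovy) { this.fovy = fovy; }

    public void updateCamera() {
        if (modelViewMat != null) {
            Matrix m = new Matrix();
            m.setDump(modelViewMat); // load the 4x4 pose delivered by Vuforia
            cam.setBack(m);          // align the jPCT camera with the marker pose
            cam.setFOV(fov);
            cam.setYFOV(fovy);
        }
    }
}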
#3
Thanks so much. Your help is greatly appreciated. You're awesome!
#4
thanks.

But should I feed my XYZ coordinates, say (100, 150, 50), directly into ballPos in targetJoint(currentPose, 10, new SimpleVector(-1, 0, 0), ballPos, 0.4f)?

What mathematical calculations do I need to apply first? You used a lot of quaternion math that's eluding me right now. :)

Also, would the skeletonhelper class be useful for my case? (Again, all the math in there is still unfamiliar to me right now.) ???

Thanks again.
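
For context, a rough sketch of what feeding a Kinect joint position into that call could look like; the class, scale factor, and axis handling are assumptions, and targetJoint() stands in for the IK helper from the Bones procedural-animation sample quoted above:

import com.threed.jpct.SimpleVector;
import raft.jpct.bones.AnimatedGroup;
import raft.jpct.bones.SkeletonPose;

// Rough, assumed sketch: drive one joint of a Bones skeleton towards a
// position reported by Kinect.
public class KinectArmDriver {
    private static final float SCALE = 0.01f;   // assumed Kinect-units -> world-units factor

    private final AnimatedGroup animatedGroup;
    private final SkeletonPose currentPose;

    public KinectArmDriver(AnimatedGroup group) {
        this.animatedGroup = group;
        this.currentPose = group.get(0).getSkeletonPose();
    }

    public void onHandMoved(float kinectX, float kinectY, float kinectZ) {
        // jPCT's Y axis points down while Kinect's points up, hence the sign flip.
        SimpleVector ballPos = new SimpleVector(kinectX * SCALE, -kinectY * SCALE, kinectZ * SCALE);

        // Joint index 10 and the bone direction are taken from the snippet quoted above.
        targetJoint(currentPose, 10, new SimpleVector(-1, 0, 0), ballPos, 1f);

        currentPose.updateTransforms();
        animatedGroup.applySkeletonPose();
        animatedGroup.applyAnimation();
    }

    // Placeholder for the targetJoint() helper from the Bones procedural-animation
    // sample (it rotates the joint so that boneDir points towards targetPos).
    private void targetJoint(SkeletonPose pose, int jointIndex, SimpleVector boneDir,
                             SimpleVector targetPos, float weight) {
        // see the Bones sample for the actual quaternion math
    }
}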
#5
I'm a noob trying to use Bones to mimic my movements from Kinect. Is there a way to map my shoulder, arms, and hands to the XY coordinates that I'm picking up from Kinect? I've been trying all sorts of things and nothing is working correctly.