
Messages - kelmer

#1
Support / Re: integration with vuforia help
February 28, 2013, 11:47:39 PM
You call that on each frame, that is inside the onDrawFrame method, after vuforia's render loop.

I don't know what might be happening with the camera; no camera code is touched in the wiki tutorial, so that shouldn't be happening if you followed it.
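The per-frame ordering described above can be sketched as follows. This is a hedged sketch, not working code: `world` and `fb` are assumed to be a jPCT-AE `World` and `FrameBuffer` created elsewhere, and the two Vuforia-side helpers are hypothetical names for whatever your native bridge provides.

```java
// Hedged sketch of the per-frame ordering described above.
// world and fb are assumed to be set up during surface creation;
// the two Vuforia helpers are hypothetical placeholders for your native bridge.
@Override
public void onDrawFrame(GL10 gl) {
    renderVuforiaFrame();            // hypothetical: Vuforia's render loop draws the camera background
    updateCameraFromVuforiaPose();   // hypothetical: copy the tracked pose into jPCT's Camera
    world.renderScene(fb);           // jPCT-AE then renders the scene on top of the camera image
    world.draw(fb);
    fb.display();
}
```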
#2
Well then, it's done.

Please take a look and make the pertinent corrections :-\
#3
I have no problem adding it, I just fear users will then complain about not getting the proper results. I can add it as an "if" section in case anyone encounters any of these problems, is that okay? Also, they would need the build you provided me with. Should I link directly to that version?
#4
You mean by adding the part that sets the FB objects? If you don't change the render target then it works without that step, in fact if you do that step you get the wrong result.
#5
I'll reply with an image


it works :)

Thank you very much, I've tried a few engines and their support forums, and I think you are by far the most helpful developer ever  ;)
#6
If I set a vertical offset I get the plane rendered where it should be. But in the texture, it's rendered without the offset. (I can see this after applying the shader, which "crops" the rendering outside that plane).

The offset seems not to be applied when rendering to texture.
#7
Quote from: EgonOlsen on February 24, 2013, 08:12:10 PM
"maybe you are using a different buffer size then? Anyway, i really don't think that using a framebuffer of that size will take you anywhere. Get a device with a much lower resolution and you'll see, what i mean..."

No, I'm using the same buffer size... and I did try on a smaller device, and the problem indeed gets worse. I'm baffled as to how to find a solution, because if I use a smaller buffer size (i.e. 960x720, for a 4:3 ratio), I get only a portion of the screen rendered and the plane gets smaller (although ONLY after rendering to texture; even a 640x480 resolution works perfectly without that step, which keeps me puzzled). And using the whole screen resolution (1196x720) results in objects being deformed (due to the camera being 4:3 and the marker thus being based on those proportions?).

I tried using the normalized values for the offsets; that does fix the problem in the display buffer, although not for the texture. It does help me move further, though. I will keep investigating.
#8
Well, I finally got the time to struggle a bit with the problem, and with the help of your posts I think I finally understand the basics of it.

Vuforia's native code requests my Galaxy Nexus's camera image at a resolution of 640x480. Then, to fill the actual display size of my device, it resamples this image taking the width of the screen (1196) as the baseline, which results in a 4:3 resampled video stream of 1196x897 (as opposed to the 1196x720, 16:10 ratio of the actual display).
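The arithmetic behind that 897 figure can be checked directly; a minimal sketch (class and method names are mine):

```java
// Scaling a 640x480 (4:3) camera image so its width fills a 1196px-wide
// screen keeps the aspect ratio and yields the 897px height quoted above.
public class ResampleMath {
    static int scaledHeight(int camWidth, int camHeight, int screenWidth) {
        return screenWidth * camHeight / camWidth; // preserve the camera's aspect ratio
    }

    public static void main(String[] args) {
        int h = scaledHeight(640, 480, 1196);
        System.out.println(h);        // 897
        System.out.println(h - 720);  // 177: vertical overflow vs. a 720px-tall display
    }
}
```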

What I don't understand is the renderToTexture process in JPCT-AE.

In order for both things to match, I create a framebuffer of 1196x897 (hardcoding the values for this device, just for the sake of simplicity). I then create an NPOT texture of exactly the same size, 1196x897.


Then, in my onDrawFrame, I just render first to the texture, then remove the render target and render to the screen. No further processing is done: first the texture, then the screen (I don't have any shaders, just a plane that gets rendered both to the texture and to the display). This texture is not used anywhere else in the code.

Doing this, objects get displaced. Yet if I don't render first to texture, my objects are placed correctly over the target.

Shouldn't the framebuffer (which is "shared" between the texture I render to and the display) be left untouched? I know the plane is untouched; I checked its position before and after rendering to texture and it's exactly the same. How come rendering to a texture first modifies the behavior of rendering to the screen afterwards?

Moreover, if I add some other objects to the world, those get displaced too, even if they aren't rendered to the texture.
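The two passes described above can be sketched with jPCT-AE's render-target API. A minimal sketch, assuming `fb`, `world` and the NPOT `renderTexture` are created as described; the variable names are mine.

```java
// Sketch of the two-pass order described above: first into the texture,
// then back to the display buffer. fb, world and renderTexture are assumed
// to exist; variable names are illustrative.
@Override
public void onDrawFrame(GL10 gl) {
    fb.setRenderTarget(renderTexture); // pass 1: render into the 1196x897 NPOT texture
    fb.clear();
    world.renderScene(fb);
    world.draw(fb);
    fb.removeRenderTarget();           // pass 2: render the same scene to the screen
    fb.clear();
    world.renderScene(fb);
    world.draw(fb);
    fb.display();
}
```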

This is a capture without rendering to texture:

[screenshot]

And this is a screencap after rendering to texture:

[screenshot]

Anyway, following your chain of thought and your advice, I tried accounting for this vertical displacement by resizing the fb again after rendering to texture and then applying a vertical offset of 177 (897-720), but then I don't see anything on my marker anymore :(



#9
There isn't a resize method in the FrameBuffer object. Maybe it isn't supported by OpenGL ES either?
#10
Thanks, we'll give it a go. I'd still like to find a proper solution, though, and update the wiki with it.
#11
The camera image covers the whole screen. I checked, and from what I can tell the image looks the same as in the regular phone camera, with no apparent distortion. I am guessing that the image is zoomed and some camera image info is lost (i.e. clipped away) in the process, as you say, so the solution you suggest wouldn't work, would it?

#12
Honestly, I don't quite get what's happening here "behind the scenes". We already get the fov/fovy values and set them up, but that does not do the trick.

If I create the framebuffer with the width and height from onSurfaceChanged, my objects stretch and shrink, even after setting fov and fovy.

If I set the fb resolution values manually, then objects are displaced along the x and y axes when I move the device, unless I also set the fov/fovy values, which is the only way they show up properly. I thought that settled it, temporarily at least, until we stumbled across this problem.





#13
What he said is right in my case too (we're kind of working together). If you read the width and height of the camera video stream in Vuforia, you get those odd resolution values (the camera and the video screen don't always share the same dimensions, apparently). If you just create the FB with the values provided by onSurfaceChanged, then the objects get stretched or shrunk depending on how you hold your device (landscape or portrait): you can see the objects stretching or shrinking as you turn the device.

If you fix those values to those provided by the camera, they show up as they should.
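The fix described in these two posts can be sketched like this. It is a hedged sketch: the hardcoded 1196x897 and the `fovX`/`fovY` variables stand in for values read from Vuforia at runtime, and the exact `FrameBuffer` constructor form depends on your GL version.

```java
// Sketch: ignore the surface size and build the FrameBuffer from the
// (resampled) camera-stream size instead, then set the FOV reported by
// Vuforia. 1196x897, fovX and fovY are placeholders for queried values.
@Override
public void onSurfaceChanged(GL10 gl, int w, int h) {
    fb = new FrameBuffer(gl, 1196, 897);  // camera-stream size, not w/h from the surface
    world.getCamera().setFOV(fovX);       // horizontal FOV from Vuforia
    world.getCamera().setYFOV(fovY);      // vertical FOV from Vuforia
}
```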
#14
Support / Re: Additional Clip Planes
February 07, 2013, 05:10:12 PM
Well that does seem like a good reason  ;D
#15
Support / Additional Clip Planes
February 07, 2013, 11:54:50 AM
Has this been removed from AE? I can't find the setClippingPlane() and removeClippingPlane() methods in the FrameBuffer object. Is there a good reason not to include them?