Only thing rendered is the SkyBox

Started by AGP, June 11, 2017, 02:11:43 AM


AGP

And even that won't turn with the camera. The screen is being redrawn, the accelerometer data is being updated, and so on. onDrawFrame calls a loopIteration() method that only rotates the camera around Y by a given amount, then redraws the screen. But the SkyBox doesn't rotate. It's the weirdest thing, but it should be a known issue (something I'm doing wrong that somebody else has already run into). Right?

EgonOlsen

The only reason I know of for something like that is that you are using a display mode without a depth buffer. Try one of the ConfigChooser implementations that come with jPCT-AE and see if that changes anything.
If it doesn't, add an IRenderHook implementation to your objects and override the beforeRendering() method with some debug output to see if your objects are actually being processed at all.
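For the depth buffer part: if you don't want to use the jPCT-AE choosers, the stock Android one does the job as well; roughly like this in the Activity's onCreate() (untested, and myRenderer stands for whatever GLSurfaceView.Renderer you already have):
Code:
GLSurfaceView glView = new GLSurfaceView(this);
glView.setEGLContextClientVersion(2);          // only if you are on GLES2
glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0); // RGBA 8888, 16 bit depth buffer, no stencil
glView.setRenderer(myRenderer);                // your own GLSurfaceView.Renderer
setContentView(glView);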

AGP

I was using old OpenGL 1 code (which works with other apps). By switching to GLES2 I solved it...

Now, how do I do a view per eye for VR? Two FrameBuffers and two cameras?

EgonOlsen

Must be some driver issue then...no idea what's going on. Anyway...

For VR, there's this: http://www.jpct.net/wiki/index.php?title=Example_for_Google_Cardboard or maybe this: http://www.jpct.net/forum2/index.php/topic,4814.msg32913.html#msg32913. You can't use two FrameBuffer instances at a time, because they are bound to the context and there can be only one. But you can use multiple render targets, which is basically the same thing. See the second link for a simple example that doesn't use any VR API, just pure jPCT-AE.
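The setup part is basically just two textures that you use as render targets; roughly like this (untested sketch inside your GLSurfaceView.Renderer, the sizes are arbitrary):
Code:
private FrameBuffer fb;   // the one "real" buffer that is bound to the context
private Texture leftEye;  // off-screen render target for the left eye
private Texture rightEye; // off-screen render target for the right eye

public void onSurfaceChanged(GL10 gl, int w, int h) {
    if (fb != null) {
        fb.dispose();
    }
    fb = new FrameBuffer(w, h);      // GLES2 constructor
    leftEye = new Texture(512, 512); // power-of-two sizes are the safe bet
    rightEye = new Texture(512, 512);
}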

AGP

What's the purpose of the dummy world in the second example? And also, what's the purpose of drawing into the buffer and calling display() for each eye, then blitting the textures into the frame buffer and displaying the buffer?

EgonOlsen

The dummy world is a hack for setting up the blitting context properly when the actual frame buffer isn't used for any other output than 2D blitting. It might not be needed in current versions, so you can try to leave it out and see if that works too.

What the code does is: render the left eye's view into a texture, render the right eye's view into another texture, then blit the resulting textures into the actual frame buffer. That's the easiest way, because otherwise you would have to render both views into the frame buffer directly, and that's not really possible without a lot of hassle... if it's possible at all.
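With the two render target textures from above, onDrawFrame boils down to something like this (untested and simplified; world and fb are the usual renderer fields, offsetCameraForEye() and the eye offsets are made up, and the dummy world hack from the example is left out):
Code:
public void onDrawFrame(GL10 gl) {
    Camera cam = world.getCamera();

    // left eye into its texture...
    fb.setRenderTarget(leftEye);
    fb.clear(RGBColor.BLACK);
    offsetCameraForEye(cam, -1f); // made-up helper that places the camera at the left eye for this frame
    world.renderScene(fb);
    world.draw(fb);
    fb.display(); // displays into the render target, not on screen

    // ...right eye into the other one...
    fb.setRenderTarget(rightEye);
    fb.clear(RGBColor.BLACK);
    offsetCameraForEye(cam, 1f); // ...and at the right eye
    world.renderScene(fb);
    world.draw(fb);
    fb.display();

    // ...then back to the real frame buffer: blit both halves and show them
    // (the linked example additionally renders a dummy world at this point).
    fb.removeRenderTarget();
    fb.clear(RGBColor.BLACK);
    fb.blit(leftEye, 0, 0, 0, 0, 512, 512, fb.getWidth() / 2, fb.getHeight(), -1, false);
    fb.blit(rightEye, 0, 0, fb.getWidth() / 2, 0, 512, 512, fb.getWidth() / 2, fb.getHeight(), -1, false);
    fb.display(); // this one actually goes to the screen
}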

AGP

Before I asked, I had tried it without the dummy world, and it crashes. But why do you have to call display() between rendering the textures?

EgonOlsen

It won't crash without the dummy world, it just might blit wrongly. It will crash without the display calls, because that will cause an OOM after some time. The display call displays the results in the render targets, not on screen.

AGP

OK, but are the final renderScene and draw calls, right before you display to the screen, really needed? Doesn't doing it three times seem redundant and expensive?

AGP

And actually, what's crashing (I only just saw it, because logcat acts up too much) is the very first call to setRenderTarget. And the targets aren't null or anything.

EgonOlsen

No, you need all of them. If you omit one of them, the corresponding buffer (screen or texture) will never be swapped.

AGP

What do you think is causing setRenderTarget to crash?

Also, have you thought about a FrameBuffer3D for this very purpose?

EgonOlsen

No idea. Do you have a stack trace?

AGP

Yup:
Quote
06-19 09:25:42.881 30009-30159/ratto.co.vr_one E/AndroidRuntime: FATAL EXCEPTION: GLThread 2267
                                                                 Process: ratto.co.vr_one, PID: 30009
                                                                 java.lang.RuntimeException: [ 1497860742879 ] - ERROR: FrameBuffer: 1 has caused a GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT exception
                                                                     at com.threed.jpct.Logger.log(Logger.java:206)
                                                                     at com.threed.jpct.GL20.checkFrameBufferObject(GL20.java:2101)
                                                                     at com.threed.jpct.GL20.setRenderTarget(GL20.java:2061)
                                                                     at com.threed.jpct.GLRenderer.setRenderTarget(GLRenderer.java:2146)
                                                                     at com.threed.jpct.FrameBuffer.setRenderTarget(FrameBuffer.java:287)
                                                                     at com.threed.jpct.FrameBuffer.setRenderTarget(FrameBuffer.java:249)
                                                                     at co.ratto.vr_one.MyRenderer.onDrawFrame(MyRenderer.java:145)
                                                                     at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1649)
                                                                     at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1354)
06-19 09:25:42.901 30009-30009/ratto.co.vr_one I/System.out: Gyro. Values: 0.0010652645, 0.0010652645, 0.0010652645
06-19 09:25:42.921 30009-30009/ratto.co.vr_one D/SensorManager: unregisterListener ::   
06-19 09:25:43.321 30009-30009/ratto.co.vr_one D/ViewRootImpl: #3 mView = null

EgonOlsen

Ok...and that happens in which case exactly? A code snippet would be helpful.