Vertices only visible in certain camera angles

Started by rkull88, May 17, 2021, 01:17:17 PM


rkull88

Hi,

I'm rendering a surface (ground terrain) from a model. The model can consist of a lot of triangles (> 1,000,000), so I'm using a dynamic object limited to the 10,000 triangles closest to my center of view. This worked nicely when I used a vertex controller (by extending GenericVertexController), although updating the vertex data was a bit slow. So I wanted to try another approach.

Looking at the code for vertex attributes, it looks like it's possible to update parts of the data at a time, which might give a smoother experience than updating the complete data set, which sometimes causes hiccups. So I removed the vertex controller and implemented my own vertex shader with a dynamic vertex attribute for the position. Instead of updating the vertices by calling object.getMesh().applyVertexController() (and object.touch()), I'm now only updating the position vertex attribute. The goal was to reach the same result as before and then try to update parts of the data at a time. The problem is that my surface is now only visible within a certain camera angle range. Within that range, I can see the triangles of the surface exactly as before, but if I rotate the camera around its view center, the triangles disappear...
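
For reference, the setup looks roughly like this (just a sketch from memory; the VertexAttributes type constant and the addVertexAttributes() call may not match the jPCT-AE API exactly, and "terrain"/"terrainShader" are placeholders for my actual objects):

// Replace the vertex controller with a custom "position" attribute that my
// shader reads instead. Names and constants below are placeholders/assumptions.
float[] positions = new float[10000 * 3 * 3];   // xyz per vertex, 3 vertices per triangle
VertexAttributes posAttr = new VertexAttributes("position", positions, VertexAttributes.TYPE_THREE_FLOATS);
terrain.getMesh().addVertexAttributes(posAttr);
terrain.setShader(terrainShader);               // custom GLSLShader that uses the attribute
// per frame: write new coordinates into 'positions' instead of running a vertex controller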

Am I forgetting to update something? The shaders are really simple and don't rely on normals/lighting; there's just another vertex attribute with the color.

Thanks!

rkull88

I'm inserting the world space position of the vertices as the position attribute and multiplying by the modelViewProjectionMatrix... maybe that's what's wrong? If so, is there a viewProjection matrix I can use instead, since I'd like to skip the model -> world transformation? Or maybe that's not how you do things :)

EgonOlsen

Quote from: rkull88 on May 17, 2021, 04:00:01 PM
I'm inserting the world space position of the vertices as the position attribute and multiplying by the modelViewProjectionMatrix... maybe that's what's wrong?
Yes, looks like it. Please have a look at the docs for the GLSLShader class here: https://www.jpct.net/jpct-ae/doc/com/threed/jpct/GLSLShader.html. It mentions various matrices that will be automatically injected into your shader if the matching uniforms are present. I think that you want uniform mat4 projectionMatrix in your case instead of the modelViewProjectionMatrix.

rkull88

Thanks for the response. I tried the projectionMatrix alone as well; it didn't work... I think I might also need the view matrix along with the projection matrix (referring to this: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/#the-model-matrix). But that's not available, right?

EgonOlsen

You mean the matrix that transforms from world into camera space? No, that's not present by default but you can add a uniform to your shader and inject it yourself. Something like this should work:


// Build a world -> camera space matrix from the camera's position and orientation:
SimpleVector pos = camera.getPosition();
pos.scalarMul(-1f);                 // translate by the negated camera position...
Matrix mat = new Matrix();
mat.translate(pos);
mat.matMul(camera.getBack());       // ...then apply the camera's rotation
...
shader.setUniform("viewMatrix", mat);   // re-inject this whenever the camera moves


rkull88

Good news!
The viewMatrix and world coordinates work! :)

Using a vertex shader with a world-space "position" attribute and calculating gl_Position as
gl_Position = projectionMatrix * viewMatrix * position;
with "viewMatrix" being the calculated and injected one, updates the vertices properly, and they stay visible even when the camera is moved around. Perfect!

Bad news!
The reason I wanted to do this was to avoid updating my whole mesh in one single onDraw() call and instead update parts of the vertices over several onDraw() calls to achieve a smoother result. That seems to hold for updating the VertexAttributes itself, but not for world.renderScene(), which takes pretty much the same time no matter whether I update the whole mesh or only parts of it.

Is that correct? Doesn't it matter how much you change a dynamic object when it comes to rendering that object?
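
For reference, this is roughly the kind of partial update I had in mind (a generic sketch; 'attributeData' is the float array backing the position attribute, and how the renderer picks up the change isn't shown here):

// Spread the update over several frames by copying only one slice of the
// new position data per onDraw() call.
private static final int CHUNKS = 4;
private int chunk = 0;

private void updateChunk(float[] attributeData, float[] newPositions) {
    int size = attributeData.length / CHUNKS;
    int from = chunk * size;
    int to = (chunk == CHUNKS - 1) ? attributeData.length : from + size;
    System.arraycopy(newPositions, from, attributeData, from, to - from);
    chunk = (chunk + 1) % CHUNKS;   // the next call updates the next slice
}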

EgonOlsen

The actual rendering will take the same amount of time, no matter how much of the object has been modified. If you are using a vertex controller, it requires a transfer of data from main to GPU memory, which takes additional time. Your shader based solution should avoid that. Have you profiled your code to see exactly where the time is being spent? Also, how large is the object in question?
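
If the profiler gives you trouble, you can also wrap the interesting calls in trace sections and look at them in systrace/the profiler timeline. A rough sketch using the standard android.os.Trace API (updateTerrain() stands in for whatever your per-frame update does):

@Override
public void onDrawFrame(GL10 gl) {
    Trace.beginSection("update");
    updateTerrain();                // your per-frame vertex/attribute updates
    Trace.endSection();

    Trace.beginSection("renderScene");
    world.renderScene(fb);
    Trace.endSection();

    Trace.beginSection("draw");
    world.draw(fb);
    Trace.endSection();

    fb.display();
}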

rkull88

Ok, so changing the whole thing at once using the shader might be my best option. Thanks!

I'm having a hard time profiling the application. Something's wrong with Android Studio, I think, because my app isn't showing up as a debuggable process... but I'll profile it and check as soon as I get Studio to work properly.

The size of the object is 10,000 triangles, but I'm not reusing vertices because of a wireframe shader I'm using, so 30,000 vertices.

rkull88

Quote from: EgonOlsen
The actual rendering will take the same amount of time, no matter how much of the object has been modified.

I have two dynamic objects with changing vertices in my scene. With regard to this, would I benefit from trying to update them both in the same onDraw() when possible, or should I try to update them in separate onDraw() calls to maximize smoothness? (Or am I overthinking this and it doesn't matter...?)

EgonOlsen

Hard to tell. It depends on the application, I guess. It might feel smoother to have several smaller hiccups than one larger one. In terms of the rendering itself, it doesn't matter.
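
If you want to try it, alternating between the two objects each frame is trivial. Something like this (updatePositions(...) being a placeholder for whatever your update code does):

// Update only one of the two dynamic objects per frame.
private boolean updateFirst = true;

@Override
public void onDrawFrame(GL10 gl) {
    updatePositions(updateFirst ? objectA : objectB);   // placeholder update call
    updateFirst = !updateFirst;

    world.renderScene(fb);
    world.draw(fb);
    fb.display();
}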

rkull88

Now I got the profiling to work. Updating the vertex attributes instead of using the vertex controller is indeed faster; however, when it comes to the rendering, the vertex attribute version takes longer in draw() and renderScene(). I don't know if these screencaps of the top-down charts can give you any clues, but here they are: https://www.dropbox.com/sh/809ag9of4esvlig/AAB3OcqPMtVRw1fSQ0uuZCYra?dl=0

*_vert_attr = changing vertex position via vertex attribute
*_vert_controller = changing via vertex controller

From what I can see, the function fillAttributes() differs quite a lot, especially the get() from ArrayList in the vertex attribute case.

Since I'm using the "position" attribute, is there any chance that stuff gets written/updated twice when it doesn't have to? I mean, the position attribute is written anyway, right? Or is there some other magic going on under the hood? :)

Thanks a lot for the support!

EgonOlsen

That the amount of work done in fillAttributes() increases when you are using vertex attributes is to be expected. I'm more confused by the fact that there seems to be significant work done in the other case as well. Are you, even when using the vertex attributes, still assigning the vertex controller? Or do you have some explicit calls to compile(...) or maybe build(false)?

rkull88

In both cases, I'm still using vertex attributes: one static attribute for drawing the wireframe based on barycentric coordinates and one dynamic attribute for the coloring of the vertices. The thing that differs is how the positions of the vertices are updated.

In the vertex attribute case, I couldn't get the position update to work unless I first did one initializing update using a vertex controller. But after that first update, it worked by only updating the position via the attribute.

I'm only calling build() on the Object3D

EgonOlsen

What performance difference are we talking about here? How much faster/slower is one solution than the other?