Argh! setScale collisions!

Started by mystara, May 16, 2008, 08:52:29 PM


mystara

Okay!

I have a 3DS model of a hollowed out area.

I've imported it at scale 1 and exported it to XML (file1).
I've then taken the same model, imported it at scale 17 and exported it to XML (file2).

If I open file2, everything behaves as it should. I can move my camera around and checkCameraCollisionEllipsoid produces something sensible when I collide with things.
I can produce exactly the same appearance/view if I import file1 and use .setScale(17) on the resulting model. However, if I set the camera to exactly the same location, checkCameraCollisionEllipsoid continually detects collisions, even when there's nothing visibly colliding with the camera.

I'm not quite sure why this should happen. Am I doing something wrong with setScale? Do I need to do something else after calling it? The only thing I can think of that might cause this problem is that the original model from file1 is somehow still present (and being collided with) after calling setScale(17), while only the scaled version of the model is what ends up visible.

Any clues? It's really hurting my brain :(



EgonOlsen

Do you have lazy transformations (http://www.jpct.net/doc/com/threed/jpct/Object3D.html#enableLazyTransformations()) enabled on that object? If so, disable them and see if that helps. If it doesn't, I'm afraid I don't know what's going on there. The scaling is taken into account when doing collision detection. I've just tested this again and it works. A small, compilable test case would be helpful.

mystara

I've tried disabling lazy transformations both before and after I set the scale, with no effect.

Unfortunately, my code is all too interconnected for me to be able to provide a test case, as I'm using a client/server model with rendering done on the client and collision detection done on the server.
I could send you the two versions of the XML file I'm using. Maybe I've done something silly in constructing them.
I also have the code for the construction of the model itself, but it's abstracted slightly. In pseudocode it looks rather like this:

Loader.loadSceneFromXML(s, state.getWorld());
For all objects in the world:
1) Convert to triangle strips
2) Set collision mode to detect collisions with others and activate optimization
3) Set up octrees

Build world
Generate camera
Set camera position
Get the structure by name (in the world)
SetScale on the structure
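Written out in (untested) jPCT calls, that abstraction amounts to roughly the following - the exact method names and constants (the strip-creation call, the OcTree constructor) are just from memory, so take them as approximate; "state" and "s" are from my real code as above:

World world = state.getWorld();
Loader.loadSceneFromXML(s, world);                    // "s" is the XML source

for (java.util.Enumeration e = world.getObjects(); e.hasMoreElements();) {
    Object3D obj = (Object3D) e.nextElement();
    obj.createTriangleStrips();                                            // 1) triangle strips
    obj.setCollisionMode(Object3D.COLLISION_CHECK_OTHERS);                 // 2) collide with others...
    obj.setCollisionOptimization(Object3D.COLLISION_DETECTION_OPTIMIZED);  // ...and optimization
    obj.setOcTree(new OcTree(obj.getMesh(), 100, OcTree.MODE_OPTIMIZED));  // 3) octree
}

world.buildAllObjects();                                                   // build world
world.getCamera().setPosition(new SimpleVector(-11, -36, -40));            // camera
Object3D structure = world.getObjectByName("model");                       // get the structure by name
structure.setScale(17f);                                                   // setScale on the structure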

<ponder> Maybe build world should come AFTER the scaling?

And to clarify, when I do this, collisions are immediately being detected by camera.checkCameraCollisionEllipsoid even though there are no visible collisions present. In response to this, my camera drifts upwards (which I would expect to happen in my collision system) until the camera ellipsoid is (I would guess) more or less exactly resting on top of the model.
To make it work correctly, all I do is load a different XML file (same model, but with the scale already set) and don't use .setScale. The camera is still in exactly the same location, everything looks identical and collisions are only detected when they ought to be detected.


EgonOlsen

The order of build() and setScale() doesn't matter. setScale() just sets a single value that changes the transformation; it doesn't depend on the object being built. I've rechecked my collision code: it does take the scaling into account (actually, there is no way around it, because it just uses the inverted world transformation, which includes the scaling). Are you 100% sure that there isn't some enableLazy... left in your code anywhere?
If there isn't, you have two options:

1. See if it helps to make the scaling permanent, i.e. load the model, then call setScale(<your scale>); rotateMesh(); setScale(1); (there's a short sketch of this after these two options).

2. Use the XML to create a test case that just loads the XML and moves the camera around with collision detection. Shouldn't be too hard to mimic at least that part of your application in a small example.
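In code, option 1 would look roughly like this (the object name is just a placeholder):

Object3D model = world.getObjectByName("model");  // placeholder name
model.setScale(17f);   // the scale you want to make permanent
model.rotateMesh();    // applies the current rotation/scaling to the mesh data itself
model.setScale(1f);    // reset it, otherwise the scaling would be applied twice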


mystara

Firstly, I've grepped for enableLazyTransformations.
The only reference I can find is in my program that generates the XML file.
In my other program, which uses collision detection, my code specifically disables lazy transformations before setScale is run. The code is like this:

getWorld().getCamera().setPosition(new SimpleVector(-11, -36, -40));
getWorld().setAmbientLight(50,50,50);
model = getWorld().getObjectByName("model");
model.disableLazyTransformations();
model.setScale(17);

It's possible that lazyTransformations were enabled when the XML file was originally created. Is that likely to cause these problems?

In answer to your other questions:

1) Yes!
This fixes the problem. Collision detection doesn't misfire. However, moving my camera around causes fragments of the model to appear and disappear. I assume this is just a temporary artifact of the rotateMesh() being applied, so I'm not too worried about it.

2) Let me know whether the above information suggests anything to you. If not, I'll see if I can modify the FPS demo to produce the same effect. I'm not quite sure if it'll be possible though.

EgonOlsen

About the fragments that disappear: Are you creating the octree before doing the rotateMesh()? That would explain it. If so, switch this and it should look ok.
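In other words (a rough sketch, reusing the octree setup from your pseudocode): modify the mesh first, then build the tree on the final mesh:

obj.setScale(17f);
obj.rotateMesh();        // change the mesh first...
obj.setScale(1f);
obj.setOcTree(new OcTree(obj.getMesh(), 100, OcTree.MODE_OPTIMIZED));  // ...then create the octree
obj.build();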

I can't see anything wrong with what you are doing, but there's one last test that you could do to see if the transformations are really fine:
Call getWorldTransformation() (http://www.jpct.net/doc/com/threed/jpct/Object3D.html#getWorldTransformation()) before the scaling and after the scaling, then set the scaling to something different and call it again. See if the matrices differ. If they are all the same, something is wrong. You can simply print out the Matrix; it has a toString().
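Something along these lines (a minimal sketch; the printlns just rely on the Matrix's toString()):

System.out.println("Before scaling: " + model.getWorldTransformation());
model.setScale(17f);
System.out.println("After setScale(17): " + model.getWorldTransformation());
model.setScale(5f);   // some other value, just for comparison
System.out.println("After setScale(5): " + model.getWorldTransformation());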

mystara

Phew, okay...

I tried getWorldTransformation() before and after I did setScale and the matrices are different.
Matrix before:
(
   1.0   0.0   0.0   0.0
   0.0   1.0   0.0   0.0
   0.0   0.0   1.0   0.0
   0.0   0.0   0.0   1.0
)

Matrix after:
(
   17.0   0.0   0.0   0.0
   0.0   17.0   0.0   0.0
   0.0   0.0   17.0   0.0
   6.0906405   33.543255   137.59756   1.0
)

Regarding the fragments that disappear: you were right, I was creating the octree before doing the rotateMesh(). I have switched this around and the structure looks okay, although the lighting is different. I imagine this is still due to the order in which I am doing things?

mystara

Hurray!

Using the FPS demo as a base, I've been able to create a reproducible test case. I will email it to you.



EgonOlsen

Ok, I can verify the problem. I've extended your example with a kind of ramp that I can scale up and down to see what happens. I don't have a 100% explanation of what goes wrong here... just an educated guess. Scaling is taken into account... actually. In my ramp example, the ramp has a scale of 20. This doesn't work too well. It starts to work fine when I set the scale to 2.5f and scale the mesh itself by 8 beforehand (using rotateMesh()).

From all I've done and all the debugging, I assume that it's an accuracy problem. Collisions are detected in ellipsoid space, which is a kind of scaled object space, so that the ellipsoid in that space is a unit sphere. The results are transformed back into world space after the process. Your model's dimensions are quite small considering that this is a large cave: the cave's height is only 10 units. When applying the inverse transform, it's as if the cave were 10/17 units high, which is 0.59... already quite small. Now, this is transformed to ellipsoid space, which means a division (in height) by PLAYER_HEIGHT/2=15. So your cave in ellipsoid space is only about 0.039 units high. That's a simplified explanation of the process, because the first scale is actually applied to the ellipsoid's attributes (like position and direction), not to the cave, but it's safe to say that, for the ellipsoid, the cave seems to be 0.039 units high... which is not much, and considering the limited accuracy of floating point math, this *may* result in this problem.

There is only one solution: don't do it. Load the model and use rotateMesh() instead. The different lighting in your example comes from the fact that the outcome of the rotateMesh() and the loading of "alreadyScaled.xml" are different, because their positions in world space aren't the same. Make sure that they are and you should be fine.

You can see the difference in position by looking at the bounding box:


level.build();
for (int i = 0; i < level.getMesh().getBoundingBox().length; i++) {
    System.out.println(level.getMesh().getBoundingBox()[i]);
}
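If the boxes differ, one (hypothetical) way to line the two versions up would be to translate the rescaled model by the difference of the minima - assuming the array is laid out as minX, maxX, minY, maxY, minZ, maxZ, and with "preScaled" standing in for the model loaded from "alreadyScaled.xml":

float[] a = level.getMesh().getBoundingBox();      // model scaled in code via rotateMesh()
float[] b = preScaled.getMesh().getBoundingBox();  // model loaded from "alreadyScaled.xml" (placeholder)
level.translate(new SimpleVector(b[0] - a[0], b[2] - a[2], b[4] - a[4]));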


I'm sorry that I can't come up with another solution, but I don't see any... :'(

mystara

My knowledge of jPCT and 3D is very limited, but I think I understand the problem here. What you're saying seems to make sense and would explain why scaling my unscaled model (via .setScale) to large values doesn't work.

The only thing I don't understand is why the problem doesn't occur when I load the already scaled model. Surely the same transformation to ellipsoid space occurs and the same accuracy error should occur?

Isn't there some way to load the unscaled model and somehow transform it to being exactly the same as the scaled model would be? Or is this what using rotateMesh() will do?

I'm not entirely sure how the positions in world space are different for my two XML files. Each of them loaded the same 3DS file. One imported it at scale 1f (and exported to XML) and the other imported it at scale 17f (and exported to XML). The one that is imported at scale 1f then gets scaled by .setScale, so all the parts of the model should surely be in the same place?

I know I'm wrong, because my 3D knowledge simply isn't that great. I just don't understand why it's wrong :D

EgonOlsen

When loading the scaled model, the model isn't 10 units high but 10*17. Plus, the 1/17 transform from the scaling doesn't apply; only the 1/15 from the ellipsoid applies, i.e. you work with a height of around 11 in ellipsoid space instead of the 0.039 you would get with the smaller model.

rotateMesh() makes the current rotation matrix permanent by changing the actual mesh data. Scaling is part of the rotation process, which is why this works for scaling too. An example: your mesh has a single vertex (1,2,3). This is the value of the vertex in object space no matter which scaling you set for the model. But when you call rotateMesh(), the vertex will be changed to (1*scale, 2*scale, 3*scale). That's why you have to reset the scaling afterwards, because you would otherwise get x*scale*scale in world space.

I don't know exactly why both approaches differ in world space after loading. I think it has something to do with the origin that is part of the XML. Try to load your model, build it, set the origin to (0,0,0) and save it as XML. That should give you a "clean" model with origin at (0,0,0) in the XML. Load that, scale it, apply rotateMesh(), set the scale back to 1, build it.
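As a sketch (setOrigin() is my guess at the right call for "set origin to (0,0,0)"; the XML export itself is whatever your generator program already does):

// In the program that generates the XML file:
Object3D model = world.getObjectByName("model");
model.build();
model.setOrigin(new SimpleVector(0, 0, 0));
// ...export the scene to XML as before...

// In the collision-detection program, after loading the "clean" XML:
Object3D level = world.getObjectByName("level");
level.setScale(17f);     // your scale
level.rotateMesh();      // bake the scaling into the mesh
level.setScale(1f);
level.build();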

mystara

Okay, I've been struggling with this for hours. At first I didn't notice that the XML file exports light sources (D'oh).

Everything is perfect if I only use ambient light. The structure, positions, and light are all correct.

However, if I add a light source, it appears significantly more powerful when rotateMesh() has been applied, and it doesn't seem to matter whether the light itself is created and added to the world before or after rotateMesh() is called.

I have uploaded a couple of screenshots to show what I mean:
With_rotate (http://alan.alwebwiz.net/withrotate.png) shows what happens if rotateMesh() is applied
Without_rotate (http://alan.alwebwiz.net/withoutrotate.png) shows what happens if it is not applied

I have tried to put the camera in almost exactly the same position. The actual locations and facings are shown in the corner. The only lines I have commented out are:

area.rotateMesh();
area.setScale(1.0f);






EgonOlsen

You mean that you are using an already scaled model, scaling it by 1 (or not at all), applying rotateMesh() to it, and the lights increase? Again, I can't verify this. I've done the same with your example code and nothing changes; the lighting stays as it was... which is what it's supposed to do, because the mesh doesn't change when scaling it by one.
Maybe you are missing a build() in the unrotated version or something?

Edit: Oh wait... I think that something like this may happen if a scaling is being set somewhere and the rotation matrix is set back to identity afterwards without setting the scaling back to 1 first... I'll check this out... yes, that has the same effect, because the normals are being rescaled in the lighting calculation with a scaling that actually isn't present in the matrix any longer. Are you by any chance doing something like this? Please try an Object3D.getScale() before doing the rotateMesh() and see what it gives.

mystara

I am using a 1f model and scaling it via .setScale(8).
Then when I do .rotateMesh(), the lights increase (I also do a .setScale(1) after this, but that seems to have no real effect).

I have tried printing the results of .getScale() immediately before and after I do .setScale() (which is immediately before .rotateMesh()). My code and the corresponding output are as follows:

Object3D level = getWorld().getObjectByName("level");
level.disableLazyTransformations();
System.out.println("First scale test is: " + level.getScale());
level.setScale(8.0f);
System.out.println("Second scale test is: " + level.getScale());
level.rotateMesh();
level.setScale(1.0f);


Output:

First scale test is: 1.0
Second scale test is: 8.0

EgonOlsen

Have you tried adding a build() (or at least a calcNormals()) after the setScale(1)? The rotateMesh() also rotates the normals; in this case, it scales the normals... this might be considered a bug, because it's actually bogus to scale a normal. But a build()/calcNormals() should fix this.
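So, picking up the code from your last post, the whole sequence would be roughly (just a sketch; the build() at the end is there to recalculate the normals that rotateMesh() has scaled):

Object3D level = getWorld().getObjectByName("level");
level.disableLazyTransformations();
level.setScale(8.0f);
level.rotateMesh();      // bakes the scale into the mesh data (and, for now, its normals)
level.setScale(1.0f);    // reset, so the scale isn't applied twice
level.build();           // or at least calcNormals(), so the lighting is correct again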