Drawing XYZ axis/plane and section view

Started by jiarongkoh, December 31, 2014, 09:09:51 AM

jiarongkoh

Hi all, I am working on an app that is capable of providing section views of a 3D model. The section view is indicated by a plane located on the XYZ axes of the model, and its location is controlled by the user via touch and scale gestures on an XYZ axis. Ideally, what I want to achieve is something that looks like the one attached.

Any help with the approach to achieving these two things is greatly appreciated:
1) an XYZ axis indicator in the corner of the view
2) a section view of the 3D model

EgonOlsen

For drawing thin lines, there's the Polyline class, which can render line sets in world space. About the sections: it should be possible to do this in a shader (assuming that you don't want to do it on the geometry itself). I'll try to create an example if I find the time.
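
For the axis indicator, a minimal sketch of what a Polyline-based setup could look like (the axis length and colors are arbitrary picks; the lines live in world space, so pinning them to a corner of the view would still need some per-frame repositioning relative to the camera, or a small separate World):


import com.threed.jpct.Polyline;
import com.threed.jpct.RGBColor;
import com.threed.jpct.SimpleVector;
import com.threed.jpct.World;

public class AxisHelper {

    // Adds three world space lines starting at the origin, one per axis.
    public static void addAxes(World world, float length) {
        SimpleVector origin = new SimpleVector(0, 0, 0);

        Polyline xAxis = new Polyline(new SimpleVector[] {
                origin, new SimpleVector(length, 0, 0) }, new RGBColor(255, 0, 0));
        Polyline yAxis = new Polyline(new SimpleVector[] {
                origin, new SimpleVector(0, length, 0) }, new RGBColor(0, 255, 0));
        Polyline zAxis = new Polyline(new SimpleVector[] {
                origin, new SimpleVector(0, 0, length) }, new RGBColor(0, 0, 255));

        world.addPolyline(xAxis);
        world.addPolyline(yAxis);
        world.addPolyline(zAxis);
    }
}
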

EgonOlsen

Here's a very brief example of how to cut slices out of an object in a shader. It's based on the HelloWorld example. Here's the Activity:


package jpct.threed.com.cliptest;

import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.view.MotionEvent;

import com.threed.jpct.Camera;
import com.threed.jpct.FrameBuffer;
import com.threed.jpct.GLSLShader;
import com.threed.jpct.Light;
import com.threed.jpct.Loader;
import com.threed.jpct.Logger;
import com.threed.jpct.Object3D;
import com.threed.jpct.RGBColor;
import com.threed.jpct.SimpleVector;
import com.threed.jpct.Texture;
import com.threed.jpct.TextureManager;
import com.threed.jpct.World;
import com.threed.jpct.util.ExtendedPrimitives;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;


/**
 * Minimal example that cuts a slice out of a cube in a fragment shader.
 */
public class ClipTest extends Activity {

    private GLSurfaceView mGLView;
    private MyRenderer renderer = null;
    private FrameBuffer fb = null;
    private World world = null;
    private RGBColor back = new RGBColor(50, 50, 100);

    private float touchTurn = 0;
    private float touchTurnUp = 0;

    private float xpos = -1;
    private float ypos = -1;

    private Object3D cube = null;
    private int fps = 0;

    private Light sun = null;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mGLView = new GLSurfaceView(getApplication());
        mGLView.setEGLContextClientVersion(2);

        renderer = new MyRenderer();
        mGLView.setRenderer(renderer);
        setContentView(mGLView);
    }

    @Override
    protected void onPause() {
        super.onPause();
        mGLView.onPause();
    }

    @Override
    protected void onResume() {
        super.onResume();
        mGLView.onResume();
    }

    @Override
    protected void onStop() {
        super.onStop();
    }

    @Override
    public boolean onTouchEvent(MotionEvent me) {

        if (me.getAction() == MotionEvent.ACTION_DOWN) {
            xpos = me.getX();
            ypos = me.getY();
            return true;
        }

        if (me.getAction() == MotionEvent.ACTION_UP) {
            xpos = -1;
            ypos = -1;
            touchTurn = 0;
            touchTurnUp = 0;
            return true;
        }

        if (me.getAction() == MotionEvent.ACTION_MOVE) {
            float xd = me.getX() - xpos;
            float yd = me.getY() - ypos;

            xpos = me.getX();
            ypos = me.getY();

            touchTurn = xd / -100f;
            touchTurnUp = yd / -100f;
            return true;
        }

        try {
            Thread.sleep(15);
        } catch (Exception e) {
            // No need for this...
        }

        return super.onTouchEvent(me);
    }

    protected boolean isFullscreenOpaque() {
        return true;
    }

    class MyRenderer implements GLSurfaceView.Renderer {

        private long time = System.currentTimeMillis();

        public MyRenderer() {
        }

        public void onSurfaceChanged(GL10 gl, int w, int h) {
            if (fb != null) {
                fb.dispose();
            }
            fb = new FrameBuffer(w, h);

            world = new World();
            world.setAmbientLight(20, 20, 20);

            sun = new Light(world);
            sun.setIntensity(250, 250, 250);

            // Guard against re-adding the texture when the surface is recreated.
            if (!TextureManager.getInstance().containsTexture("texture")) {
                Texture texture = new Texture(getResources().openRawResource(R.raw.airvent));
                TextureManager.getInstance().addTexture("texture", texture);
            }

            cube = ExtendedPrimitives.createCube(20);
            cube.setTexture("texture");
            cube.setCulling(false);
            cube.strip();
            cube.build();

            // Load the modified default shaders (see below) and define the visible
            // slice along the x-axis in object space.
            GLSLShader shader = new GLSLShader(
                    Loader.loadTextFile(getResources().openRawResource(R.raw.vertex)),
                    Loader.loadTextFile(getResources().openRawResource(R.raw.fragment)));
            shader.setUniform("startX", -4f);
            shader.setUniform("endX", 4f);
            cube.setShader(shader);

            world.addObject(cube);

            Camera cam = world.getCamera();
            cam.moveCamera(Camera.CAMERA_MOVEOUT, 50);
            cam.lookAt(cube.getTransformedCenter());

            SimpleVector sv = new SimpleVector();
            sv.set(cube.getTransformedCenter());
            sv.y -= 100;
            sv.z -= 100;
            sun.setPosition(sv);

        }

        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        }

        public void onDrawFrame(GL10 gl) {
            if (touchTurn != 0) {
                cube.rotateY(touchTurn);
                touchTurn = 0;
            }

            if (touchTurnUp != 0) {
                cube.rotateX(touchTurnUp);
                touchTurnUp = 0;
            }

            fb.clear(back);
            world.renderScene(fb);
            world.draw(fb);
            fb.display();

            if (System.currentTimeMillis() - time >= 1000) {
                Logger.log(fps + "fps");
                fps = 0;
                time = System.currentTimeMillis();
            }
            fps++;
        }
    }
}


...and you need the shaders. I used a modified default shader from jPCT-AE's distribution. Here comes the vertex shader:


uniform mat4 modelViewMatrix;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 textureMatrix;

uniform vec4 additionalColor;
uniform vec4 ambientColor;

uniform float alpha;
uniform float shininess;
uniform bool useColors;

uniform float fogStart;
uniform float fogEnd;
uniform vec3 fogColor;

uniform int lightCount;

uniform vec3 lightPositions[8];
uniform vec3 diffuseColors[8];
uniform vec3 specularColors[8];
uniform float attenuation[8];

attribute vec4 position;
attribute vec3 normal;
attribute vec4 color;
attribute vec2 texture0;
attribute vec2 texture1;
attribute vec2 texture2;
attribute vec2 texture3;

varying vec2 texCoord[4];
varying vec4 vertexColor;
varying vec3 fogVertexColor;
varying float fogWeight;
varying vec4 vertPos;

const vec4 WHITE = vec4(1,1,1,1);

void main() {

texCoord[0] = (textureMatrix * vec4(texture0, 0, 1)).xy;
texCoord[1] = texture1;
texCoord[2] = texture2;
texCoord[3] = texture3;

vec4 vertexPos = modelViewMatrix * position;
vertexColor = ambientColor + additionalColor;

vertPos=position; // Assign vertexPos instead for world space clipping

if (lightCount>0) {
// This is correct only if the modelview matrix is orthogonal. In jPCT-AE, it always is...unless you fiddle around with it.
vec3 normalEye   = normalize(modelViewMatrix * vec4(normal, 0.0)).xyz;

float angle = dot(normalEye, normalize(lightPositions[0] - vertexPos.xyz));

if (angle > 0.0) {
vertexColor += vec4((diffuseColors[0] * angle + specularColors[0] * pow(angle, shininess))*(1.0/(1.0+length(lightPositions[0] - vertexPos.xyz)*attenuation[0])), 1);
}

// Freaky Adreno shader compiler can't handle loops without locking or creating garbage results....this is why the
// loop has been unrolled here. It's faster this way on PowerVR SGX540 too, even if PVRUniSCoEditor says otherwise...

if (lightCount>1) {
angle = dot(normalEye, normalize(lightPositions[1] - vertexPos.xyz));

if (angle > 0.0) {
vertexColor += vec4((diffuseColors[1] * angle + specularColors[1] * pow(angle, shininess))*(1.0/(1.0+length(lightPositions[1] - vertexPos.xyz)*attenuation[1])), 1);
}

if (lightCount>2) {
angle = dot(normalEye, normalize(lightPositions[2] - vertexPos.xyz));

if (angle > 0.0) {
vertexColor += vec4((diffuseColors[2] * angle + specularColors[2] * pow(angle, shininess))*(1.0/(1.0+length(lightPositions[2] - vertexPos.xyz)*attenuation[2])), 1);
}

if (lightCount>3) {
angle = dot(normalEye, normalize(lightPositions[3] - vertexPos.xyz));

if (angle > 0.0) {
vertexColor += vec4((diffuseColors[3] * angle + specularColors[3] * pow(angle, shininess))*(1.0/(1.0+length(lightPositions[3] - vertexPos.xyz)*attenuation[3])), 1);
}

if (lightCount>4) {
angle = dot(normalEye, normalize(lightPositions[4] - vertexPos.xyz));

if (angle > 0.0) {
vertexColor += vec4((diffuseColors[4] * angle + specularColors[4] * pow(angle, shininess))*(1.0/(1.0+length(lightPositions[4] - vertexPos.xyz)*attenuation[4])), 1);
}

if (lightCount>5) {
angle = dot(normalEye, normalize(lightPositions[5] - vertexPos.xyz));

if (angle > 0.0) {
vertexColor += vec4((diffuseColors[5] * angle + specularColors[5] * pow(angle, shininess))*(1.0/(1.0+length(lightPositions[5] - vertexPos.xyz)*attenuation[5])), 1);
}

if (lightCount>6) {
angle = dot(normalEye, normalize(lightPositions[6] - vertexPos.xyz));

if (angle > 0.0) {
vertexColor += vec4((diffuseColors[6] * angle + specularColors[6] * pow(angle, shininess))*(1.0/(1.0+length(lightPositions[6] - vertexPos.xyz)*attenuation[6])), 1);
}
if (lightCount>7) {
angle = dot(normalEye, normalize(lightPositions[7] - vertexPos.xyz));

if (angle > 0.0) {
vertexColor += vec4((diffuseColors[7] * angle + specularColors[7] * pow(angle, shininess))*(1.0/(1.0+length(lightPositions[7] - vertexPos.xyz)*attenuation[7])), 1);
}
}
}
}
}
}
}
}
}


if (fogStart != -1.0) {
fogWeight = clamp((-vertexPos.z - fogStart) / (fogEnd - fogStart), 0.0, 1.0);
fogVertexColor = fogColor * fogWeight;
} else {
fogWeight = -1.0;
}

vertexColor=vec4(min(WHITE, vertexColor).xyz, alpha);

if (useColors) {
vertexColor *= color;
}

gl_Position = modelViewProjectionMatrix * position;
}


The only difference from the normal default shader is the additional varying called vertPos, which carries the vertex position. Depending on your needs, this can be either the actual object space position (as here) or the world space position (as mentioned in the comment in the source).

And here's the fragment shader:


precision mediump float;

uniform sampler2D textureUnit0;
uniform sampler2D textureUnit1;
uniform sampler2D textureUnit2;
uniform sampler2D textureUnit3;

uniform int textureCount;
uniform int blendingMode[4];

uniform float startX;
uniform float endX;

varying vec2 texCoord[4];
varying vec4 vertexColor;
varying float fogWeight;
varying vec3 fogVertexColor;
varying vec4 vertPos;

const vec4 WHITE = vec4(1,1,1,1);

void main() {

if (vertPos.x<startX || vertPos.x>endX) {
    discard;
}

vec4 col = texture2D(textureUnit0, texCoord[0]) * vertexColor;

if (textureCount>1) {

// Can't index texture samplers and switch doesn't seem to compile(?)...end result:

int mode=blendingMode[1];
vec2 texCo=texCoord[1];

if (mode==0) {
// Modulate
col *= texture2D(textureUnit1, texCo);
} else if (mode==1) {
// Add
col += texture2D(textureUnit1, texCo);
} else if (mode==3) {
// Blend
col *= (WHITE - texture2D(textureUnit1, texCo));
} else if (mode==2) {
// Replace
col = texture2D(textureUnit1, texCo);
} else if (mode==4) {
// Decal
vec4 col2=texture2D(textureUnit1, texCo);
col = vec4(mix(col.rgb, col2.rgb, col2.a), col2.a);
}

if (textureCount>2) {

mode=blendingMode[2];
texCo=texCoord[2];

if (mode==0) {
// Modulate
col *= texture2D(textureUnit2, texCo);
} else if (mode==1) {
// Add
col += texture2D(textureUnit2, texCo);
} else if (mode==3) {
// Blend
col *= (WHITE - texture2D(textureUnit2, texCo));
} else if (mode==2) {
// Replace
col = texture2D(textureUnit2, texCo);
} else if (mode==4) {
// Decal
vec4 col2=texture2D(textureUnit2, texCo);
col = vec4(mix(col.rgb, col2.rgb, col2.a), col2.a);
}

if (textureCount>3) {

mode=blendingMode[3];
texCo=texCoord[3];

if (mode==0) {
// Modulate
col *= texture2D(textureUnit3, texCo);
} else if (mode==1) {
// Add
col += texture2D(textureUnit3, texCo);
} else if (mode==3) {
// Blend
col *= (WHITE - texture2D(textureUnit3, texCo));
} else if (mode==2) {
// Replace
col = texture2D(textureUnit3, texCo);
} else if (mode==4) {
// Decal
vec4 col2=texture2D(textureUnit3, texCo);
col = vec4(mix(col.rgb, col2.rgb, col2.a), col2.a);
}
}
}
}

if (fogWeight>-0.9) {
col.xyz = (1.0-fogWeight) * col.xyz + fogVertexColor;
}

gl_FragColor=col;
}





The only differences here are two additional uniforms (these are set in the Activity's setup code) that take the start and the end of the section to show (this example clips along the x-axis only), and the check of these values against the new vertPos varying: any fragment outside that range is discarded.
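
To move the section around at runtime, as asked for in the first post, you can keep a reference to the shader and just update the uniforms between frames. A rough sketch, assuming the GLSLShader created in onSurfaceChanged() is stored in a field and that the new bounds come from your own touch handling (updateSection and clipShader are hypothetical names, not part of the engine):


    // Hypothetical field holding the shader created in onSurfaceChanged().
    private GLSLShader clipShader = null;

    // Call this (e.g. from onDrawFrame()) whenever the user drags the section plane.
    private void updateSection(float newStartX, float newEndX) {
        if (clipShader != null) {
            // Note the float parameters; the int variant of setUniform()
            // wouldn't match the float uniforms in the fragment shader.
            clipShader.setUniform("startX", newStartX);
            clipShader.setUniform("endX", newEndX);
        }
    }
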

And the result is a cube with a slice cut out of it:



I hope that this somehow helps to get you started.

jiarongkoh

Hi Egon, thanks so much for replying. Seems like I have quite a bit of catching up to do. Some queries, and please pardon me if some of these questions sound stupid:

1) To use the vertex and fragment shaders, I notice that both of these source files are in a zipped file at lib/jpct_shaders.zip in the jpct-ae folder of the downloaded package. So do I unzip that and modify the vertex and fragment shaders?
2) And once I do that, I can place those two source files into my project and simply call loadTextFile like you did in the onSurfaceChanged method?
3) I am a little unsure how to use the setUniform method. What I understand from the docs and your sample Activity above is that it defines the start and end points on the X axis to include in the rendering, set here as the float values -4 and 4, and that these values are then passed into the fragment shader for the computation. Am I right to say that?

EgonOlsen

To 1.): The zip file is an alternative to the files that are included in the jar, because the distribution method of some IDEs removed the shaders from the jar when deploying everything to the device. You can use them, or extract the jar and use those. The ones in the jar might be a little more current, but it shouldn't really matter.
The default shaders mimic the fixed-function pipeline. The one that I used in my example is the most complex one and covers all possible cases; the other variants are simplified ones for special cases that jPCT picks for performance reasons when possible.
To 2.): Yes, that's basically it.
To 3.): Yes, that's the way it works. Just make sure to use actual floats (like 4f instead of just 4) in your calls, or you'll end up with the integer variant of setUniform, which won't work properly with a float uniform.
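
As an illustration of the difference, using the shader instance from the Activity above:


// Correct: matches the "uniform float startX/endX" declarations in the fragment shader.
shader.setUniform("startX", -4f);
shader.setUniform("endX", 4f);

// Wrong: 4 is an int literal, so the int variant of setUniform() would be called,
// which doesn't work properly with a float uniform.
// shader.setUniform("endX", 4);
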

jiarongkoh

Hi Egon, your code works wonders, thanks so much.

I am exploring one more feature: setting transparency on the 'cut-away' section. My approach was to alter the alpha value in vertexShader.src by calling setUniform("shader", 5f), but it doesn't seem to change the opacity at all. My reason for doing this instead of simply using setTransparency is that I want to show the full 3D model, i.e. the 'remaining' section rendered fully solid while the 'cut-away' section is shown with some transparency. This way I think it's possible for me to set shaders twice (I'm not sure if that's possible yet): one shader to define the solid shading for the remaining section, and a second shader to define the transparent shading for the cut-away section.

Any suggestions?

jiarongkoh

Oh, I tried another approach and it works. I cloned the model, called setShadersOpaque on one model and setShadersSolid on the other, and added both to the world. Looks not bad :)
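
Roughly, the setup looks like this (setShadersOpaque and setShadersSolid above are just my own helper methods). The sketch assumes a second fragment shader, called fragment_inverted here, which is a copy of Egon's fragment shader with the discard condition inverted so that it keeps only the cut-away part:


// One object shows the kept slice with the shader from Egon's example,
// a clone shows the cut-away part semi-transparently.
Object3D solidPart = ExtendedPrimitives.createCube(20);
solidPart.setTexture("texture");
solidPart.build();

GLSLShader solidShader = new GLSLShader(
        Loader.loadTextFile(getResources().openRawResource(R.raw.vertex)),
        Loader.loadTextFile(getResources().openRawResource(R.raw.fragment)));
solidShader.setUniform("startX", -4f);
solidShader.setUniform("endX", 4f);
solidPart.setShader(solidShader);

Object3D cutAwayPart = solidPart.cloneObject();
cutAwayPart.build();

// fragment_inverted: hypothetical copy of the fragment shader that discards
// fragments inside [startX, endX] instead of outside it.
GLSLShader cutAwayShader = new GLSLShader(
        Loader.loadTextFile(getResources().openRawResource(R.raw.vertex)),
        Loader.loadTextFile(getResources().openRawResource(R.raw.fragment_inverted)));
cutAwayShader.setUniform("startX", -4f);
cutAwayShader.setUniform("endX", 4f);
cutAwayPart.setShader(cutAwayShader);
cutAwayPart.setTransparency(5); // make the cut-away part see-through

world.addObject(solidPart);
world.addObject(cutAwayPart);
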