Off screen rendering

Started by mxar, March 04, 2015, 09:38:02 PM


mxar


Hi,

I'm developing an augmented reality application on an Android device.

One of the requirements of the project is to render 3D models off screen and then manipulate the result of the rendering.

I used the FrameBuffer.readPixels() method to create a bitmap, but this method is too slow for my needs.

Can you suggest a better and faster way?

The projectCenter3D2D(Camera camera, FrameBuffer buffer, Object3D obj) method returns the center of the object in screen coordinates (2D) by transforming and projecting it from 3D object space into 2D screen space.

How can I calculate the start position (x, y), width and height of a bounding rectangle of an Object3D obj in 2D screen space?

Thanks in advance.



EgonOlsen

If you want to manipulate the image with Android's default image API, there's no other way than getPixels(). You can improve performance a little by using the method that takes an int[]-array to avoid the creation of a new instance each time you are calling this method.
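As a rough sketch of what I mean (assuming the int[]-taking variant is an overload of getPixels(); check the FrameBuffer docs for the exact signature):

    private int[] pixelCache = null;

    private int[] grabFrame(FrameBuffer fb) {
        if (pixelCache == null) {
            pixelCache = new int[fb.getWidth() * fb.getHeight()];
        }
        // Fills the existing array instead of allocating a new int[] on every call.
        fb.getPixels(pixelCache);
        return pixelCache;
    }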
Another way is to render into a texture. You can do this by assigning a Texture- or NPOTTexture-instance to the FrameBuffer as render target to render the scene into that one. However, image manipulation has to happen in a custom shader then as a second pass. This also makes sense only if you want to use the manipulated image for blitting it on screen. If you want to save it to sdcard or something similar, you are back to getPixels() anyway.
It might help to know what kind of image manipulation you have in mind... ???

About the 2D coords: You can calculate an object's bounding box in world space like so: http://www.jpct.net/wiki/index.php/Getting_Worldspace_Bounds and then use http://www.jpct.net/jpct-ae/doc/com/threed/jpct/Interact2D.html#project3D2D(com.threed.jpct.Camera, com.threed.jpct.FrameBuffer, com.threed.jpct.SimpleVector) to calculate its 2D projection. Out of the calculated 2D values, you still have to figure out the bounds, because min and max in 3D don't mean that the corresponding projection is min and max in 2D as well. Depending on the camera, it could be reversed.
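Roughly like this (just a sketch, assuming that Mesh.getBoundingBox() returns the object-space bounds as {minX, maxX, minY, maxY, minZ, maxZ} as the wiki article describes, and that project3D2D() may return null for points that can't be projected):

    // Uses com.threed.jpct.Interact2D, SimpleVector and Matrix.
    private int[] getScreenBounds(Object3D obj, Camera cam, FrameBuffer fb) {
        float[] bb = obj.getMesh().getBoundingBox();
        Matrix world = obj.getWorldTransformation();

        float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE;
        float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE;

        // Project all 8 corners of the box; a min/max corner in 3D isn't
        // necessarily a min/max point in 2D after projection.
        for (int xi = 0; xi <= 1; xi++) {
            for (int yi = 2; yi <= 3; yi++) {
                for (int zi = 4; zi <= 5; zi++) {
                    SimpleVector corner = new SimpleVector(bb[xi], bb[yi], bb[zi]);
                    corner.matMul(world); // object space -> world space
                    SimpleVector p = Interact2D.project3D2D(cam, fb, corner);
                    if (p != null) { // may be null if the corner can't be projected
                        minX = Math.min(minX, p.x);
                        minY = Math.min(minY, p.y);
                        maxX = Math.max(maxX, p.x);
                        maxY = Math.max(maxY, p.y);
                    }
                }
            }
        }
        // x, y, width, height of the 2D bounding rectangle
        return new int[] { (int) minX, (int) minY, (int) (maxX - minX), (int) (maxY - minY) };
    }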

mxar

Thanks for the answer.

The performance problem is when I call FrameBuffer.readPixels(). I think that call is slow.

Would it be faster if I called FrameBuffer.setRenderTarget() with an NPOT texture?

The bitmap manipulation (of the pixels from readPixels()) is done in C++ using the NDK, so the manipulation itself is fast.

So I must find a fast way to read the pixels of the FrameBuffer.

Is there a way to render 3D models off screen without using a FrameBuffer?

Thanks in advance.



EgonOlsen

There is no way to get fast access to a frame buffer's pixels. That's by design of all current graphics hardware and drivers; it's not an engine limitation. You can render into the frame buffer directly or into a frame buffer object, which is what the render target solution that I mentioned above does. In both cases, the pixels are present on the GPU, and getting them out is expensive. That's not what GPUs are designed for. The only solution that is really fast is to manipulate the image on the GPU by using a shader (as mentioned).
Whether that's feasible depends on the kind of image manipulation that you have in mind, hence my question above.

mxar

Thanks,

I think the best solution is to do the image manipulation using shaders. It must be the fastest solution.

Do you have any example of image manipulation using shaders, or do you know a tutorial?

I just want to get an idea about it.


Many thanks.




EgonOlsen

The basic idea is to render into a texture, then use that texture for another object that fills the whole screen and render that object with the image-manipulation shader. But as I've said multiple times: it would make it easier to help if I knew what kind of image manipulation you have in mind (for example: creating a negative image: simple; blurring the image: possible; applying some crazy full-screen effect: more complicated)...
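For the simplest case from that list (a negative image), the second-pass fragment shader boils down to a few lines. A rough sketch (the texture and varying names here are placeholders, not the exact names used by the default shaders):

    precision mediump float;

    uniform sampler2D textureUnit0; // the texture the scene was rendered into

    varying vec2 texCoord; // placeholder name for the interpolated texture coordinate

    void main() {
        vec4 color = texture2D(textureUnit0, texCoord);
        // Invert the RGB channels, keep alpha as it is.
        gl_FragColor = vec4(1.0 - color.rgb, color.a);
    }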

mxar

Hi

In the AR game I need to render some 3D models off screen (space ships, planets, ...).
Perhaps it's a good idea to do the off-screen rendering into an NPOT texture (the screen width and height are not power-of-2 sizes).

When the off-screen rendering is completed, I want to apply some effects to the NPOT texture before rendering this texture on screen.

I want the black pixels of the NPOT texture to be replaced by the corresponding pixels of an image (if a pixel of the NPOT texture at position x,y is black, it must be replaced by the image's pixel located at the same x,y).

After that I want to replace the white pixels of the modified NPOT texture with the corresponding pixels of another texture (if a pixel of the NPOT texture at position x,y is white, it must be replaced by that other texture's pixel located at the same x,y).

Finally, the modified NPOT texture is rendered on screen.

Using FrameBuffer.getPixels() slows down the game because I need to apply the effects many times.

So I believe using shaders is a good solution.

Do you have an example of shaders to help me?

Thanks in advance.





EgonOlsen

Where do these images that should be mixed with the rendered scene come from?

mxar


These images exist just to create effects and will be created by me.
They will be stored in the assets folder.

Actually, the first image is an array of images which are shown one after another, creating an animation (like a movie). These images are displayed at a high frame rate. That's why I want the color replacement to be fast.

Thanks in advance.


mxar

Hi again,

I assigned an NPOTTexture to the FrameBuffer as a render target to render the scene into.

How can I convert the NPOTTexture to a bitmap?

Thanks in advance.

EgonOlsen

You can't. That's not the point of doing it this way. You basically have to render into the texture, set up another scene with a simple plane that uses this texture, assign your two other textures to the same plane in addition, and write a basic shader that does the mixing.
If it helps, I can try to create a small demo of this approach within the next few days.

mxar


Thank you for the answer,

At the beginning I create an Overlay object, sceneImageOverlay, that uses an NPOTTexture, npotTexture, as its texture.
The overlay has the screen dimensions, screenWidth and screenHeight.

The overlay is created in the onSurfaceChanged method:


npotTexture = new NPOTTexture(screenWidth, screenHeight, new RGBColor(255, 255, 0)); // or null, for rendering
textureManager.addTexture("npotTexture", npotTexture);

sceneImageOverlay = new Overlay(world, 0, 0, screenWidth, screenHeight, "npotTexture", true);
sceneImageOverlay.setDepth(900);
sceneImageOverlay.setVisibility(false);
sceneImageOverlay.setSourceCoordinates(0, 0, screenWidth, screenHeight);



In the application I render two 3D objects off screen. After that I re-add the removed sceneImageOverlay, which shows the NPOT texture with the two rendered 3D objects.


In the onDrawFrame method I do this:

// Remove the overlay; it must not participate in the off-screen rendering.
world.removeObject(sceneImageOverlay.getObject3D());

// Render into the NPOT texture, which has screenWidth x screenHeight dimensions.
frameBuffer.setRenderTarget(npotTexture);

// Clear the buffer.
frameBuffer.clear(back);

ObjectA_3D.setVisibility(true); // must be part of the off-screen rendering
ObjectB_3D.setVisibility(true); // must be part of the off-screen rendering
world.renderScene(frameBuffer);
world.draw(frameBuffer);
frameBuffer.display();

// Up to here, the NPOT texture has the two 3D objects drawn on its surface.

// Now show the overlay with the NPOT texture.
world.addObject(sceneImageOverlay.getObject3D()); // re-add the overlay; it carries the NPOT texture
sceneImageOverlay.getObject3D().setVisibility(true);

frameBuffer.removeRenderTarget(); // use the on-screen renderer again
frameBuffer.clear(back);
ObjectA_3D.setVisibility(false); // must not participate in the on-screen rendering
ObjectB_3D.setVisibility(false); // must not participate in the on-screen rendering
world.renderScene(frameBuffer);
world.draw(frameBuffer);
frameBuffer.display();


Finally, the sceneImageOverlay's NPOT texture is displayed on screen.

What do you think about this solution?

I used an Overlay instead of a plane; I don't know which is the better solution.

Now I want to apply some effects to this NPOT texture.

Behind sceneImageOverlay another overlay with depth = 1000 is drawn, which has a captured image as its texture.

So I want to replace the red pixels of the sceneImageOverlay with transparent pixels, so that the captured image's pixels at the same positions show through on screen.

I think I can use the fragment shader's discard command to do this.
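Something like this is what I have in mind (just a rough sketch; the uniform and varying names are placeholders):

    precision mediump float;

    uniform sampler2D textureUnit0; // the NPOT texture with the off-screen rendering

    varying vec2 texCoord; // placeholder name

    void main() {
        vec4 color = texture2D(textureUnit0, texCoord);
        // If the pixel is (almost) pure red, discard it so the overlay
        // behind this one shows through at that position.
        if (distance(color.rgb, vec3(1.0, 0.0, 0.0)) < 0.01) {
            discard;
        }
        gl_FragColor = color;
    }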

The next step is to replace the blue pixels of sceneImageOverlay with the corresponding pixels of another texture.

So far I haven't found a solution. I think a small demo from you would be very helpful.

The displayed NPOT texture is shown upside down. How can I fix this problem?


Thanks in advance

EgonOlsen

Using the Overlay is actually a good idea, but reusing the same world for both passes isn't. Put your scene in one world and the overlay in another, and remove all this add/remove stuff.

Anyway, here's a simple, hacky example:

Java code:

package com.threed.jpct.example.mixing;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.app.Activity;
import android.content.res.AssetManager;
import android.opengl.GLSurfaceView;
import android.os.Bundle;

import com.threed.jpct.Camera;
import com.threed.jpct.Config;
import com.threed.jpct.FrameBuffer;
import com.threed.jpct.GLSLShader;
import com.threed.jpct.Light;
import com.threed.jpct.Loader;
import com.threed.jpct.Logger;
import com.threed.jpct.Matrix;
import com.threed.jpct.NPOTTexture;
import com.threed.jpct.Object3D;
import com.threed.jpct.RGBColor;
import com.threed.jpct.SimpleVector;
import com.threed.jpct.Texture;
import com.threed.jpct.TextureInfo;
import com.threed.jpct.TextureManager;
import com.threed.jpct.World;
import com.threed.jpct.util.ExtendedPrimitives;
import com.threed.jpct.util.MemoryHelper;
import com.threed.jpct.util.Overlay;

/**
* @author EgonOlsen
*
*/
public class MixingExample extends Activity {

    private GLSurfaceView mGLView;
    private MyRenderer renderer = null;
    private FrameBuffer buffer = null;
    private World world = null;
    private NPOTTexture target = null;
    private World frontWorld = null;
    private Overlay viewPlane = null;

    private RGBColor back = new RGBColor(50, 50, 100);
    private RGBColor ambient = new RGBColor(20, 20, 20);
    private RGBColor front = ambient;

    private Object3D cube = null;
    private Object3D sphere = null;
    private int fps = 0;
    private Light sun = null;

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mGLView = new GLSurfaceView(getApplication());
        mGLView.setEGLContextClientVersion(2);
        renderer = new MyRenderer();
        mGLView.setRenderer(renderer);
        setContentView(mGLView);

        // Important!
        Config.maxTextureLayers = 4;
    }

    @Override
    protected void onPause() {
        super.onPause();
        mGLView.onPause();
    }

    @Override
    protected void onResume() {
        super.onResume();
        mGLView.onResume();
    }

    @Override
    protected void onStop() {
        super.onStop();
        System.exit(0);
    }

    protected boolean isFullscreenOpaque() {
        return true;
    }

    class MyRenderer implements GLSurfaceView.Renderer {

        private long time = System.currentTimeMillis();

        public MyRenderer() {
        }

        public void onSurfaceChanged(GL10 gl, int w, int h) {
            try {
                AssetManager assets = MixingExample.this.getBaseContext().getAssets();
                TextureManager tm = TextureManager.getInstance();

                buffer = new FrameBuffer(w, h);

                world = new World();
                world.setAmbientLight(ambient.getRed(), ambient.getGreen(), ambient.getBlue());

                frontWorld = new World();

                sun = new Light(world);
                sun.setIntensity(250, 250, 250);

                tm.addTexture("texture", new Texture(assets.open("expo_rocks.png")));
                tm.addTexture("mix1", new Texture(assets.open("expo_stain.png")));
                tm.addTexture("mix2", new Texture(assets.open("harz_rocks.png")));

                cube = ExtendedPrimitives.createCube(20);
                cube.setTexture("texture");
                cube.build();
                world.addObject(cube);

                sphere = ExtendedPrimitives.createSphere(10, 20);
                sphere.build();
                sphere.setLighting(Object3D.LIGHTING_NO_LIGHTS);
                sphere.translate(10, -20, 0);
                world.addObject(sphere);

                Camera cam = world.getCamera();
                cam.moveCamera(Camera.CAMERA_MOVEOUT, 50);
                cam.lookAt(cube.getTransformedCenter());

                SimpleVector sv = new SimpleVector();
                sv.set(cube.getTransformedCenter());
                sv.y -= 100;
                sv.z -= 100;
                sun.setPosition(sv);

                target = new NPOTTexture(w, h, back);
                tm.addTexture("target", target);

                TextureInfo ti = new TextureInfo(tm.getTextureID("target"));
                ti.add(tm.getTextureID("mix1"), TextureInfo.MODE_BLEND);
                ti.add(tm.getTextureID("mix2"), TextureInfo.MODE_BLEND);

                // Setup the actually visible plane that shows the scene
                Matrix flipTextureMat = new Matrix();
                flipTextureMat.set(1, 1, -1);
                flipTextureMat.translate(0, 1, 0);

                viewPlane = new Overlay(frontWorld, buffer, null);
                viewPlane.setTexture(ti);
                viewPlane.getObject3D().setTextureMatrix(flipTextureMat);

                // Assign the shader that does the mixing
                GLSLShader shader = new GLSLShader(Loader.loadTextFile(assets.open("vertexShader.src")), Loader.loadTextFile(assets.open("fragmentShader.src")));
                shader.setUniform("backColor", new float[] { back.getRed() / 255f, back.getGreen() / 255f, back.getBlue() / 255f });
                shader.setUniform("frontColor", new float[] { front.getRed() / 255f, front.getGreen() / 255f, front.getBlue() / 255f });
                viewPlane.getObject3D().setShader(shader);

                MemoryHelper.compact();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        }

        public void onDrawFrame(GL10 gl) {
            cube.rotateY(0.1f);
            cube.rotateX(0.1f);

            buffer.setRenderTarget(target);
            buffer.clear(back);
            world.renderScene(buffer);
            world.draw(buffer);
            buffer.display();
            buffer.removeRenderTarget();

            buffer.clear(back);
            frontWorld.renderScene(buffer);
            frontWorld.draw(buffer);
            buffer.display();

            if (System.currentTimeMillis() - time >= 1000) {
                Logger.log(fps + "fps");
                fps = 0;
                time = System.currentTimeMillis();
            }
            fps++;
        }
    }
}


The shaders are derived from one of the simpler default shaders that jPCT-AE comes with.

Vertex shader:

uniform mat4 modelViewProjectionMatrix;

uniform vec4 additionalColor;
uniform vec4 ambientColor;

uniform float alpha;
uniform bool useColors;

uniform mat4 textureMatrix;

attribute vec4 position;
attribute vec3 normal;
attribute vec4 color;
attribute vec2 texture0;
attribute vec2 texture1;
attribute vec2 texture2;

varying vec2 texCoord[3];
varying vec4 vertexColor;

const vec4 WHITE = vec4(1,1,1,1);

void main() {
    texCoord[0] = (textureMatrix * vec4(texture0, 0, 1)).xy;
    texCoord[1] = texture1;
    texCoord[2] = texture2;

    vertexColor = vec4(min(WHITE, ambientColor + additionalColor).xyz, alpha);

    if (useColors) {
        vertexColor *= color;
    }

    gl_Position = modelViewProjectionMatrix * position;
}


Fragment shader:

precision highp float;

uniform sampler2D textureUnit0;
uniform sampler2D textureUnit1;
uniform sampler2D textureUnit2;

uniform vec3 backColor;
uniform vec3 frontColor;

varying vec2 texCoord[3];
varying vec4 vertexColor;

void main() {
    vec4 color = texture2D(textureUnit0, texCoord[0]);

    if (length(color.rgb - backColor) < 0.0001) {
        color = texture2D(textureUnit1, texCoord[1]);
    } else if (length(color.rgb - frontColor) < 0.0001) {
        color = texture2D(textureUnit2, texCoord[2]);
    } else {
        color *= vertexColor;
    }

    gl_FragColor = color;
}


Result: (screenshot of the mixed output not reproduced here)

Explanation:
It's similar to your approach in that it uses an Overlay to display the scene. It uses two instances of World for this. The texture flip on the Overlay is done by using a texture matrix (the flip comes from OpenGL's screen coordinate system, BTW). The Overlay object itself uses three texture layers (hence the Config adjustment in onCreate, because the default is 2). The first one is the NPOTTexture that contains the actual scene, the second one is the first image to mix, and the third one is the other image to mix. The fragment shader then mixes them all based on the actual pixel's color. It does this by using a simple distance check between the rendered color and the color to replace. If it's below a certain threshold (which I made up out of thin air), it replaces the pixel with the texture of either the second or the third stage. In the example image, this applies to the background and to the sphere's color (which is just the ambient color).

mxar

Many Thanks!!!!  :)

The hacky example is really very helpful.

It's a good example for my needs.

Thank you very much.


mxar

Hi,

Can Config.maxTextureLayers be greater than 4?

Thanks in advance.