FrameBuffer.getPixels()

Started by aZen, March 04, 2014, 11:46:55 PM

Previous topic - Next topic

aZen

Could you explain how the int array FrameBuffer.getPixels() is structured?

I'm having a hard time understanding how it works with oversampling. In normal mode I can simply access it in my software shader and everything works as expected. With oversampling the pixels are "distorted" and I cannot predict which pixel I am accessing. Let me clarify that a bit.


(screenshot attached, click to enlarge)

So with normal sampling there is a one-to-one mapping from the zBuffer to the getPixels() buffer. With oversampling I was expecting a 4 -> 1 mapping, but that is not the case. In the "Oversampling" example in the picture I'm using a 2 -> 1 mapping (i.e. pixels[c/2] = zBuffer[c]). I cannot figure out how to do it correctly and have already spent way too much time on it.

So my question is: How do the zBuffer and the getPixels() array relate when using oversampling?

Edit: This is what I get when using a 4->1 mapping.

(screenshot attached, click to enlarge)

aZen

Mhmm, I figured it out. It seems that the height is stretched while the width is not?! Is that by design?

Anyway, the correct relation is:

   pixels[(c/(w*2*2))*w + (c/2)%w] = zBuffer[c]

Note that the integer divisions round down, so you cannot remove them!
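A minimal sketch of that mapping (my own illustration, not jPCT code): `w` is assumed to be the width of the final image, so the oversampled buffer is `2*w` entries wide. Each index `c` in the oversampled buffer lands on the final pixel at row `c/(2*w)/2` and column `(c%(2*w))/2`, which is exactly what the formula computes:

```java
// Sketch only: maps an index c in the 2x-oversampled buffer to the
// corresponding index in the final-resolution pixels[] array.
// Assumes w is the final image width (oversampled rows are 2*w long).
public class OversampleIndex {

    static int map(int c, int w) {
        // c / (w*2*2) collapses pairs of oversampled scanlines into one
        // final row; (c/2) % w collapses horizontal pixel pairs into one
        // final column. Integer division (rounding down) is essential here.
        return (c / (w * 2 * 2)) * w + (c / 2) % w;
    }

    public static void main(String[] args) {
        int w = 4; // final image 4 pixels wide -> oversampled rows have 8 entries
        // The 2x2 quad at oversampled indices 0, 1 (row 0) and 8, 9 (row 1)
        // should all collapse onto final pixel 0:
        System.out.println(map(0, w) + " " + map(1, w) + " " + map(8, w) + " " + map(9, w));
    }
}
```

Every final pixel gets written four times this way (once per quad member), so the last write wins; for a depth readback that is usually fine.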

EgonOlsen

It's organized just like the normal framebuffer, but at twice the resolution, i.e. each scanline is 2*width entries long and there are 2*height of them.
One pixel in the final image is composed of a 2x2 quad of pixels from two adjacent scanlines, for example:


scanline 1: 11_22_33_44...
scanline 2: 11_22_33_44...


The 1s are all the pixels that are taken into account for the first pixel in the first scanline of the final image, the 2s for the second, and so on.
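The quad layout above suggests how downsampling could work. Here is a rough sketch (my own code, not what jPCT actually does internally) that averages each 2x2 quad of an oversampled buffer into one final pixel, assuming 0xRRGGBB-packed ints:

```java
// Sketch only: average each 2x2 quad of a 2x-oversampled buffer 'os'
// (2*w entries wide, 2*h scanlines) into a final w-by-h pixel array.
// Assumes pixels are packed as 0xRRGGBB ints.
public class Downsample {

    static int[] average(int[] os, int w, int h) {
        int[] out = new int[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // top-left corner of this pixel's 2x2 quad in the oversampled buffer
                int base = (2 * y) * (2 * w) + 2 * x;
                int a = os[base];             // quad top-left
                int b = os[base + 1];         // quad top-right
                int c = os[base + 2 * w];     // quad bottom-left
                int d = os[base + 2 * w + 1]; // quad bottom-right
                // average each color channel separately
                int r = (((a >> 16) & 0xff) + ((b >> 16) & 0xff)
                       + ((c >> 16) & 0xff) + ((d >> 16) & 0xff)) / 4;
                int g = (((a >> 8) & 0xff) + ((b >> 8) & 0xff)
                       + ((c >> 8) & 0xff) + ((d >> 8) & 0xff)) / 4;
                int bl = ((a & 0xff) + (b & 0xff)
                        + (c & 0xff) + (d & 0xff)) / 4;
                out[y * w + x] = (r << 16) | (g << 8) | bl;
            }
        }
        return out;
    }
}
```

Reading the quad as two entries from scanline 2y and two from scanline 2y+1 matches the "11_22_33_44" picture above.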

aZen

Ah, yeah, I guess that makes sense. I mixed up the conversion in my head; that's why I didn't get it at first.

Thanks for explaining!