
Author Topic: descrambling a texture in shaders  (Read 6059 times)

marcov

  • Administrator
  • Hero Member
  • *
  • Posts: 11382
  • FPC developer.
descrambling a texture in shaders
« on: March 03, 2018, 02:22:20 pm »
Since there is some OpenGL activity again, I have another lingering issue; this is more of a feasibility question.

From a hardware device I get an image that is encoded somewhat strangely. I only have to show it using OpenGL, so it would pay (performance-wise) to do the decoding in shaders.

The image is 1280x1024 and internally subdivided into 128x128 sub-images, so ten by eight. That is how the resulting image should look.

However, due to limitations of the hardware device, the lines of each sub-image are written to memory one after another.

So the first 128x128 sub-image occupies the first 12-13 lines of the full image, the second 128x128 the next 12-13 lines, etc. (128*128 pixels / 1280 pixels per line = 12.8 lines, i.e. 12 full lines plus 1024 pixels on the thirteenth).

One could instead view the source as a single column of 80 sub-images (128 wide by 80*128 = 10240 high), but that would make the source and destination different in size.

So to draw the texture efficiently I would basically need 80 small draws, and the input is oddly packed.
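
To make the packing concrete, this is the mapping from a destination pixel (x, y) to its linear offset in the scrambled source, written as a sketch in GLSL-style integer arithmetic (assuming one element per pixel):

Code: [Select]
// Which 128x128 tile destination pixel (x, y) falls in (10 x 8 grid).
int tileIndex(int x, int y)
{
  return (x / 128) + (y / 128) * 10;
}

// Linear offset of destination pixel (x, y) in the scrambled source buffer.
int sourceOffset(int x, int y)
{
  return tileIndex(x, y) * 128 * 128   // whole tiles stored before this one
       + (y % 128) * 128               // full rows inside this tile
       + (x % 128);                    // column within the row
}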

Anybody encountered something like this? Concepts or keywords to search for?

ChrisR

  • Full Member
  • ***
  • Posts: 247
Re: descrambling a texture in shaders
« Reply #1 on: March 03, 2018, 03:43:52 pm »
marcov

Assuming all your 2D tiles are precisely the same size (128x128 in your example), the elegant solution is to load them all onto your graphics card as a single 3D texture. Then you can use a single shader for all your draw calls:
  glBindTexture(GL_TEXTURE_3D, tex);
This gives you a volume of size X*Y*Z (128*128*80). When you draw each tile you select its slice with the Z ('depth') coordinate. The vertex shader encodes the horizontal offset, vertical offset and depth. The resulting fragment shader is simply:

#version 330
in vec3 xyz;
out vec4 color;
uniform sampler3D tex;
void main() {
    color = texture(tex, xyz);
}

Projects 3 and 4 of https://github.com/neurolabusc/OpenGLCoreTutorials illustrate 2D and 3D textures that you can use as a basis for your solution.
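
The vertex shader mentioned above just needs to place the tile's quad on screen and pass along the 3D texture coordinate; a minimal sketch (the uniform names here are placeholders, not from the tutorial) could be:

#version 330
layout(location = 0) in vec2 quadPos;  // unit-quad corner in [0,1]
uniform vec2 tileOffset;    // lower-left corner of this tile in NDC
uniform vec2 tileScale;     // size of one tile in NDC
uniform float tileDepth;    // (tileIndex + 0.5) / 80.0 samples the centre of slice tileIndex
out vec3 xyz;

void main()
{
  xyz = vec3(quadPos, tileDepth);                                // 3D texture coordinate
  gl_Position = vec4(tileOffset + quadPos * tileScale, 0.0, 1.0);
}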

ykot

  • Full Member
  • ***
  • Posts: 141
Re: descrambling a texture in shaders
« Reply #2 on: March 03, 2018, 04:06:07 pm »
If I understand correctly, each sub-image is simply stored linearly in the source 1280x1024 texture ("scrambled"), but you want to draw it as 1280x1024 with each sub-image in its appropriate row and column, right?

You just need to render a quad and calculate the appropriate source coordinates in the fragment shader. The following fragment shader should do the trick; just attach the source texture and draw a 1280x1024 quad on the screen:

Code: [Select]
#version 330

uniform sampler2D textureScrambled;
out vec4 outputColor;

void main()
{
  ivec2 screenPos = ivec2(gl_FragCoord.xy);

  // Local position in each sub-image.
  ivec2 posInImage = ivec2(screenPos.x % 128, screenPos.y % 128);

  // Position of each sub-image on the screen.
  ivec2 imagePos = ivec2(screenPos.x / 128, screenPos.y / 128);

  // Linear image index on the screen.
  int imageIndex = imagePos.x + imagePos.y * 10;

  // Linear offset of this pixel in the scrambled source.
  int linearPos = imageIndex * 16384 + posInImage.y * 128 + posInImage.x;

  // Source coordinates of scrambled image.
  ivec2 srcPos = ivec2(linearPos % 1280, linearPos / 1280);

  // Read source pixel and draw it at correct (current) location.
  outputColor = texelFetch(textureScrambled, srcPos, 0);
}

Alternatively, you could use a compute shader, where you would load the pixels of each sub-image into shared (groupshared) memory, issue a barrier, and then write the sub-image data to the appropriate location in the destination texture. This would probably give better performance, but requires OpenGL 4.3+.
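
A minimal sketch of such a compute shader (this simpler variant skips the shared-memory staging and just copies one pixel per invocation; the image formats and binding points are assumptions):

Code: [Select]
#version 430

layout(local_size_x = 16, local_size_y = 16) in;
layout(binding = 0, rgba8) readonly  uniform image2D srcScrambled;    // 1280x1024 source
layout(binding = 1, rgba8) writeonly uniform image2D dstUnscrambled;  // 1280x1024 destination

void main()
{
  ivec2 dst = ivec2(gl_GlobalInvocationID.xy);
  if (dst.x >= 1280 || dst.y >= 1024)
    return;

  ivec2 posInImage = dst % 128;                     // position inside the 128x128 tile
  ivec2 imagePos   = dst / 128;                     // which tile in the 10x8 grid
  int   imageIndex = imagePos.x + imagePos.y * 10;  // linear tile index
  int   linearPos  = imageIndex * 16384 + posInImage.y * 128 + posInImage.x;
  ivec2 src        = ivec2(linearPos % 1280, linearPos / 1280);

  imageStore(dstUnscrambled, dst, imageLoad(srcScrambled, src));
}

Dispatch it with glDispatchCompute(1280 / 16, 1024 / 16, 1) and issue glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT) before sampling the destination texture.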

Edit: fixed data type for "linearPos", which should be "int".
« Last Edit: March 03, 2018, 04:54:45 pm by ykot »

marcov

  • Administrator
  • Hero Member
  • *
  • Posts: 11382
  • FPC developer.
Re: descrambling a texture in shaders
« Reply #3 on: March 03, 2018, 04:28:08 pm »
Ykot's solution looks interesting, since it would allow seamless integration with what I have.

The only question I have is about gl_FragCoord.xy. Aren't most coordinates normalized (0..1) in later OpenGL versions, rather than "pixels"? Do I have to configure something for that?

The Khronos page for this identifier seems to also indicate that.
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/gl_FragCoord.xhtml

Or am I misunderstanding something?

Of course that could be worked around by having a uniform with the real sizes and multiplying.

ChrisR's solution is also interesting for a screen that shows each 128x128 tile individually. Something like that may be needed for a later stage of the project; I hadn't really thought about it, but treating the data as a 3D texture would be a solution.




ykot

  • Full Member
  • ***
  • Posts: 141
Re: descrambling a texture in shaders
« Reply #4 on: March 03, 2018, 04:54:07 pm »
gl_FragCoord uses non-normalized window-space coordinates, so "x" will be in the range [0, width) and "y" in the range [0, height).

The only issue would arise if you want to draw the 1280x1024 quad on the screen somewhere other than at (0, 0). In that case, when rendering the full-screen quad, you can pass normalized source texture coordinates - just make sure that top/left is (0, 0) and bottom/right is (1, 1):
Code: [Select]
#version 330

uniform sampler2D textureScrambled;
in vec2 texCoord;
out vec4 outputColor;

void main()
{
  ivec2 screenPos = ivec2(texCoord * textureSize(textureScrambled, 0));

  // Local position in each sub-image.
  ivec2 posInImage = ivec2(screenPos.x % 128, screenPos.y % 128);

  // Position of each sub-image on the screen.
  ivec2 imagePos = ivec2(screenPos.x / 128, screenPos.y / 128);

  // Linear image index on the screen.
  int imageIndex = imagePos.x + imagePos.y * 10;

  // Linear offset of this pixel in the scrambled source.
  int linearPos = imageIndex * 16384 + posInImage.y * 128 + posInImage.x;

  // Source coordinates of scrambled image.
  ivec2 srcPos = ivec2(linearPos % 1280, linearPos / 1280);

  // Read source pixel and draw it at correct (current) location.
  outputColor = texelFetch(textureScrambled, srcPos, 0);
}

Note that you could also store your texture as a Texture1D and simply use "linearPos" from the above code to fetch from it. Similarly, as per ChrisR's answer, you could store the texture as a Texture3D and use (posInImage, imageIndex) as the source 3D coordinates.
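
For the 3D case, a sketch of that fetch (assuming the scrambled data has been uploaded as a 128x128x80 Texture3D, one tile per slice; the destination size is hard-coded):

Code: [Select]
#version 330

uniform sampler3D textureScrambled3D;  // 128 x 128 x 80, one tile per slice
in vec2 texCoord;
out vec4 outputColor;

void main()
{
  // Destination pixel, from normalized coordinates and the 1280x1024 output size.
  ivec2 screenPos  = ivec2(texCoord * vec2(1280.0, 1024.0));

  ivec2 posInImage = screenPos % 128;               // position inside the tile
  ivec2 imagePos   = screenPos / 128;               // which tile in the 10x8 grid
  int   imageIndex = imagePos.x + imagePos.y * 10;  // slice index in the 3D texture

  outputColor = texelFetch(textureScrambled3D, ivec3(posInImage, imageIndex), 0);
}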

ChrisR

  • Full Member
  • ***
  • Posts: 247
Re: descrambling a texture in shaders
« Reply #5 on: March 04, 2018, 04:28:13 pm »
One thought regarding both methods suggested is that the interpolation may not be pretty if you do not output at the same resolution as the input. My preference would be to de-tile the image on the CPU and send a single 1280x1024 2D texture to the GPU.

marcov

  • Administrator
  • Hero Member
  • *
  • Posts: 11382
  • FPC developer.
Re: descrambling a texture in shaders
« Reply #6 on: March 04, 2018, 06:32:24 pm »
ChrisR: that's what I have done in the first tests. Since I can adapt my processing to work on the scrambled image, I only have to descramble before display, so performance is not (yet) such an issue.

There are still some unknowns in the project though, so I want to get a feel in time for what works and what doesn't. Luckily this only comes into play in May.

Currently I'm wrestling with the Verilog FPGA code, which is a first for me.

 
