How to use many textures for computation in WebGL

0 votes · 2 answers

Focusing on just the uniforms/attributes/varyings of a single vertex/fragment shader pair, I'd like to know how to model the following system using textures. Everything is in 2D.

  • position: the object's current position
  • translation: the object's proposed next position, based on some prior computation on the CPU
  • velocity: the object's velocity
  • rotation: the object's next rotation
  • force (such as gravity or collision): the sum of the forces acting on the object in each direction
  • temperature: the object's temperature
  • mass/density: the object's mass/density
  • curvature: movement along a predefined curve (like easing)

At first I thought I would do this:

attribute vec3 a_position;
attribute vec3 a_translation;
attribute vec3 a_velocity;
attribute vec3 a_rotation;
attribute vec3 a_force;
attribute vec3 a_temperature;
attribute vec3 a_material; // mass and density
attribute vec4 a_color;
attribute vec4 a_curvature;

But this would probably run into the problem of too many attributes.

So then I remembered reading about using textures for this. Without going into too much detail, I just want to know how to structure the uniforms/attributes/varyings to make it work:

attribute vec2 a_position_uv;
attribute vec2 a_translation_uv;
attribute vec2 a_velocity_uv;
attribute vec2 a_rotation_uv;
attribute vec2 a_force_uv;
attribute vec2 a_temperature_uv;
attribute vec2 a_material_uv;
attribute vec2 a_color_uv;
attribute vec2 a_curvature_uv;

If we do it like this, with every attribute referencing texture coordinates, then the textures can store vec4 data and we avoid the too-many-attributes problem.
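For what it's worth, those per-vertex UV attributes would have to be generated on the CPU. A minimal sketch, assuming one texel per element and sampling each texel's center; `makeLookupUVs` is a hypothetical helper, not part of the question's code:

```javascript
// Build a flat array of (u, v) pairs, one per element, addressing a
// textureWidth x textureHeight data texture row by row. The +0.5 centers
// the lookup on the texel so NEAREST filtering fetches the right value.
function makeLookupUVs(numElements, textureWidth, textureHeight) {
  const uvs = new Float32Array(numElements * 2);
  for (let i = 0; i < numElements; ++i) {
    uvs[i * 2 + 0] = ((i % textureWidth) + 0.5) / textureWidth;
    uvs[i * 2 + 1] = (Math.floor(i / textureWidth) + 0.5) / textureHeight;
  }
  return uvs;
}

// e.g. 2 elements in a 2x1 texture: u = 0.25 and 0.75, v = 0.5 for both
const uvs = makeLookupUVs(2, 2, 1);
```

If all the data textures share the same layout, the nine UV attributes would all contain identical data, which is a hint that one attribute could serve them all.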

But now I'm not sure how to define the textures for the two shaders. I'm wondering if it's just something like this:

uniform sampler2D u_position_texture;
uniform sampler2D u_translation_texture;
uniform sampler2D u_velocity_texture;
uniform sampler2D u_rotation_texture;
uniform sampler2D u_force_texture;
uniform sampler2D u_temperature_texture;
uniform sampler2D u_material_texture;
uniform sampler2D u_color_texture;
uniform sampler2D u_curvature_texture;

Then in main in the vertex shader we could use the textures to compute the position:

void main() {
  vec4 position = texture2D(u_position_texture, a_position_uv);
  vec4 translation = texture2D(u_translation_texture, a_translation_uv);
  // ...
  gl_Position = position * ...
}

This way we wouldn't need any varyings in the vertex shader to pass the color along, unless we want to use the results of our computations in the fragment shader, but I can figure that part out. For now I just want to know whether it's possible to structure the shaders like this, so the final vertex shader would be:

attribute vec2 a_position_uv;
attribute vec2 a_translation_uv;
attribute vec2 a_velocity_uv;
attribute vec2 a_rotation_uv;
attribute vec2 a_force_uv;
attribute vec2 a_temperature_uv;
attribute vec2 a_material_uv;
attribute vec2 a_color_uv;
attribute vec2 a_curvature_uv;

uniform sampler2D u_position_texture;
uniform sampler2D u_translation_texture;
uniform sampler2D u_velocity_texture;
uniform sampler2D u_rotation_texture;
uniform sampler2D u_force_texture;
uniform sampler2D u_temperature_texture;
uniform sampler2D u_material_texture;
uniform sampler2D u_color_texture;
uniform sampler2D u_curvature_texture;

void main() {
  vec4 position = texture2D(u_position_texture, a_position_uv);
  vec4 translation = texture2D(u_translation_texture, a_translation_uv);
  // ...
  gl_Position = position * ...
}

And the final fragment shader might be:

uniform sampler2D u_position_texture;
uniform sampler2D u_translation_texture;
uniform sampler2D u_velocity_texture;
uniform sampler2D u_rotation_texture;
uniform sampler2D u_force_texture;
uniform sampler2D u_temperature_texture;
uniform sampler2D u_material_texture;
uniform sampler2D u_color_texture;
uniform sampler2D u_curvature_texture;

varying vec2 v_foo;
varying vec2 v_bar;

void main() {
  // ...
  gl_FragColor = position * ... * v_foo * v_bar;
}
Tags: webgl, gpu, textures, shader
2 Answers

Answer (score: 1)

LJ's answer is arguably the right one, but if you do want to store your data in textures, then what you need is a per-vertex index:

attribute float index;

From that index you can then compute UV coordinates:

uniform vec2 textureSize;  // size of texture

float numVec4sPerElement = 8.;
float elementsPerRow = floor(textureSize.x / numVec4sPerElement);
float tx = mod(index, elementsPerRow) * numVec4sPerElement;
float ty = floor(index / elementsPerRow);
vec2 baseTexel = vec2(tx, ty) + 0.5;

Now you can pull out the data. (Note: this assumes a floating-point texture.)

vec4 position    = texture2D(dataTexture, baseTexel / textureSize);
vec4 translation = texture2D(dataTexture, (baseTexel + vec2(1,0)) / textureSize);
vec4 velocity    = texture2D(dataTexture, (baseTexel + vec2(2,0)) / textureSize);
vec4 rotation    = texture2D(dataTexture, (baseTexel + vec2(3,0)) / textureSize);
vec4 forces      = texture2D(dataTexture, (baseTexel + vec2(4,0)) / textureSize);

...and so on.
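The same addressing math has to be mirrored on the CPU when the texture is filled. A minimal JavaScript sketch of it (function name is illustrative):

```javascript
// Compute the texel coordinates of an element's first vec4, mirroring the
// GLSL above: elements are packed left to right, numVec4sPerElement texels
// each, wrapping to the next row when a row is full.
function texelForElement(index, textureWidth, numVec4sPerElement) {
  const elementsPerRow = Math.floor(textureWidth / numVec4sPerElement);
  const tx = (index % elementsPerRow) * numVec4sPerElement;
  const ty = Math.floor(index / elementsPerRow);
  return { tx, ty };
}

// e.g. a 16-texel-wide texture with 8 vec4s per element fits 2 elements
// per row, so element 2 starts at the beginning of row 1:
const t = texelForElement(2, 16, 8); // tx = 0, ty = 1
```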

Of course you could interleave the data even more. For example, position above is a vec4, so maybe position.w could hold gravity, translation.w could hold mass, etc...

You'd then lay out your data in the texture like this:

position0, translation0, velocity0, rotation0, forces0, .... 
position1, translation1, velocity1, rotation1, forces1, .... 
position2, translation2, velocity2, rotation2, forces2, .... 
position3, translation3, velocity3, rotation3, forces3, .... 
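Filling a float texture's backing array with that interleaved layout might look like this sketch (illustrative names; it assumes the texture width is a multiple of the element size, as in the full example below, so each element's vec4s stay contiguous):

```javascript
// Each element occupies numVec4sPerElement consecutive RGBA texels,
// i.e. numVec4sPerElement * 4 floats.
const numVec4sPerElement = 5; // position, translation, velocity, rotation, forces
const floatsPerElement = numVec4sPerElement * 4;

function writeElement(data, index, { position, translation, velocity, rotation, forces }) {
  const base = index * floatsPerElement;
  data.set(position,    base +  0);
  data.set(translation, base +  4);
  data.set(velocity,    base +  8);
  data.set(rotation,    base + 12);
  data.set(forces,      base + 16);
}

const data = new Float32Array(2 * floatsPerElement);
writeElement(data, 1, {
  position:    [1, 2, 3, 1],
  translation: [4, 5, 6, 1],
  velocity:    [0, 0, 0, 0],
  rotation:    [0, 0, 0, 1],
  forces:      [0, -9.8, 0, 0],
});
```

The array would then be uploaded with gl.texImage2D using gl.FLOAT, exactly as in the working example below.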

const m4 = twgl.m4;
const v3 = twgl.v3;
const gl = document.querySelector('canvas').getContext('webgl');
const ext = gl.getExtension('OES_texture_float');
if (!ext) {
  alert('need OES_texture_float');
}


const vs = `
attribute float index;

uniform vec2 textureSize;
uniform sampler2D dataTexture;

uniform mat4 modelView;
uniform mat4 projection;

varying vec3 v_normal;
varying vec4 v_color;

void main() {
  float numVec4sPerElement = 3.;  // position, normal, color
  float elementsPerRow = floor(textureSize.x / numVec4sPerElement);
  float tx = mod(index, elementsPerRow) * numVec4sPerElement;
  float ty = floor(index / elementsPerRow);
  vec2 baseTexel = vec2(tx, ty) + 0.5;

  // Now you can pull out the data.

  vec3 position = texture2D(dataTexture, baseTexel / textureSize).xyz;
  vec3 normal   = texture2D(dataTexture, (baseTexel + vec2(1,0)) / textureSize).xyz;
  vec4 color    = texture2D(dataTexture, (baseTexel + vec2(2,0)) / textureSize);

  gl_Position = projection * modelView * vec4(position, 1);

  v_color = color;
  v_normal = normal;
}
`;

const fs = `
precision highp float;

varying vec3 v_normal;
varying vec4 v_color;

uniform vec3 lightDirection;

void main() {
  float light = dot(lightDirection, normalize(v_normal)) * .5 + .5;
  gl_FragColor = vec4(v_color.rgb * light, v_color.a);
}
`;

// compile shader, link, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);

// make some vertex data
const radius = 1;
const thickness = .3;
const radialSubdivisions = 20;
const bodySubdivisions = 12;
const verts = twgl.primitives.createTorusVertices(
    radius, thickness, radialSubdivisions, bodySubdivisions);
/*
  verts is now an object like this
  
  {
    position: float32ArrayOfPositions,
    normal: float32ArrayOfNormals,
    indices: uint16ArrayOfIndices,
  }
*/

// convert the vertex data to a texture
const numElements = verts.position.length / 3;
const vec4sPerElement = 3;  // position, normal, color
const maxTextureWidth = 2048;  // you could query this
const elementsPerRow = maxTextureWidth / vec4sPerElement | 0;
const textureWidth = elementsPerRow * vec4sPerElement;
const textureHeight = (numElements + elementsPerRow - 1) /
                      elementsPerRow | 0;

const data = new Float32Array(textureWidth * textureHeight * 4);
for (let i = 0; i < numElements; ++i) {
  const dstOffset = i * vec4sPerElement * 4;
  const posOffset = i * 3;
  const nrmOffset = i * 3;
  data[dstOffset + 0] = verts.position[posOffset + 0];
  data[dstOffset + 1] = verts.position[posOffset + 1];
  data[dstOffset + 2] = verts.position[posOffset + 2];
  
  data[dstOffset + 4] = verts.normal[nrmOffset + 0];
  data[dstOffset + 5] = verts.normal[nrmOffset + 1];
  data[dstOffset + 6] = verts.normal[nrmOffset + 2];  
  
  // color, just make it up
  data[dstOffset +  8] = 1;
  data[dstOffset +  9] = (i / numElements * 2) % 1;
  data[dstOffset + 10] = (i / numElements * 4) % 1;
  data[dstOffset + 11] = 1;
}

// use indices as `index`
const arrays = {
  index: { numComponents: 1, data: new Float32Array(verts.indices), },
};

// calls gl.createBuffer, gl.bindBuffer, gl.bufferData
const bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays);

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, textureWidth, textureHeight, 0, gl.RGBA, gl.FLOAT, data);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

function render(time) {
  time *= 0.001;  // seconds
  
  twgl.resizeCanvasToDisplaySize(gl.canvas);
  
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
  gl.enable(gl.DEPTH_TEST);
  gl.enable(gl.CULL_FACE);

  const fov = Math.PI * 0.25;
  const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
  const near = 0.1;
  const far = 20;
  const projection = m4.perspective(fov, aspect, near, far);
  
  const eye = [0, 0, 3];
  const target = [0, 0, 0];
  const up = [0, 1, 0];
  const camera = m4.lookAt(eye, target, up);
  const view = m4.inverse(camera);

  // set the matrix for each model in the texture data
  const modelView = m4.rotateY(view, time);
  m4.rotateX(modelView, time * .2, modelView);
  
  gl.useProgram(programInfo.program);
  
  // calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
  twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
  
  // calls gl.activeTexture, gl.bindTexture, gl.uniformXXX
  twgl.setUniforms(programInfo, {
    lightDirection: v3.normalize([1, 2, 3]),
    textureSize: [textureWidth, textureHeight],
    projection: projection,
    modelView: modelView,
  });  
  
  // calls gl.drawArrays or gl.drawElements
  twgl.drawBufferInfo(gl, bufferInfo);

  requestAnimationFrame(render);
}
requestAnimationFrame(render);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>

Note that pulling data out of textures is slower than getting it from attributes. How much slower probably depends on the GPU, but it may still be faster than whatever alternative you're considering.

You might also be interested in using textures to batch draw calls, effectively storing what would traditionally be uniforms in a texture:

https://stackoverflow.com/a/54720138/128511


Answer (score: 2)

The question you linked to wasn't about too many attributes but about too many varyings. 99.9% of WebGL implementations support up to 16 attributes, which is not only on par with the maximum number of texture units supported on most platforms, but should also be fine, assuming you don't need to pass all of that data from the vertex shader to the fragment shader. If you're not doing any larger batching, you might as well just start with uniforms. That said, if you decide to use textures for some reason, you should probably use a single UV coordinate and align all the data textures to it; otherwise you'd effectively almost double your bandwidth requirements.
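To make the numbers concrete: the question's first attempt declares nine attributes, and in WebGL1 a float/vec2/vec3/vec4 attribute each occupy one slot (only matrix attributes take several), so it fits under the 16-attribute floor mentioned above. In a browser you could query the real limits with gl.getParameter(gl.MAX_VERTEX_ATTRIBS) and gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS); the sketch below just counts the declarations:

```javascript
// The attributes from the question's first sketch; one slot each in WebGL1.
const declaredAttributes = [
  'a_position', 'a_translation', 'a_velocity', 'a_rotation',
  'a_force', 'a_temperature', 'a_material', 'a_color', 'a_curvature',
];
const guaranteedMinimum = 16; // what ~99.9% of implementations support
const fits = declaredAttributes.length <= guaranteedMinimum; // true: 9 of 16
```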

Beyond that, your data set itself can be compressed quite a bit. You could store position and rotation as a quaternion (in 2D you could even get away with a vec3 holding x, y and α). velocity and torque (which is missing from your original set) are really just the deltas between the current and next position/rotation, so you only need to store one of the two sets (either velocity/torque or next position/rotation). force seems irrelevant, since you apply it on the CPU anyway. mass and temperature are scalar values, so they fit together into one vec2 along with some other jazz. But the more I try to make sense of it, the more half-baked it seems: you can't really run the simulation on the GPU, yet half of your attributes are simulation attributes that rendering doesn't need. It feels like you're prematurely optimizing something that doesn't even remotely exist yet, so my advice would be: just build it and see.
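As a concrete illustration of that compression (a sketch under this answer's assumptions: 2D state stored as x, y, α, with velocity and torque derived as per-frame deltas rather than stored separately; all values hypothetical):

```javascript
// Store only the current and proposed-next state; velocity and torque
// fall out as deltas, so one of the two sets never needs to be uploaded.
function delta(current, next) {
  return next.map((v, i) => v - current[i]);
}

const current = [0, 0, 0];    // x, y, angle (α)
const next    = [1, 2, 0.5];  // proposed next state from the CPU step
const [vx, vy, torque] = delta(current, next); // vx = 1, vy = 2, torque = 0.5

// mass and temperature are scalars, so they pack into a single vec2.
const massTemperature = [1.0, 293.15];
```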
