For computing a multidimensional array of data on the GPU, GPUComputationRenderer looks like the right tool for the job.
I started from an example project and hit a roadblock:
var gpuCompute = new GPUComputationRenderer( 100, 100, this.renderer );
var dtPosition = gpuCompute.createTexture();
var dtVelocity = gpuCompute.createTexture();

// Each variable's compute pass runs as a full-screen fragment pass,
// so addVariable expects a *fragment* shader here; whatever
// compVertexShader() returns below must still be a fragment shader.
var velocityVariable = gpuCompute.addVariable(
    "textureVelocity",
    this.compFragmentShader(),
    dtVelocity
);
var positionVariable = gpuCompute.addVariable(
    "texturePosition",
    this.compVertexShader(),
    dtPosition
);

gpuCompute.setVariableDependencies( velocityVariable, [ positionVariable, velocityVariable ] );
gpuCompute.setVariableDependencies( positionVariable, [ positionVariable, velocityVariable ] );

// init() returns an error string on failure and null on success.
var error = gpuCompute.init();
if ( error !== null ) {
    console.error( error );
}

gpuCompute.compute();
// TODO: Obtain pixels from resulting texture
I have explored some of the GPGPU examples on the three.js site.
I also came across this Stack Overflow question, THREE.js read pixels from GPUComputationRenderer texture, but it offers no concrete solution, only pointers to raw WebGL.
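From those pointers, the closest I can piece together is the sketch below, reading the variable's current render target back with renderer.readRenderTargetPixels(). The 100x100 size matches my code above; the RGBA float buffer format is my assumption (it has to match the render target's texture type, and float read-back is not supported on every platform), so I cannot confirm this is correct:

// Untested read-back sketch. Assumes a 100x100 RGBA *float*
// render target, matching the GPUComputationRenderer setup above.
var width = 100;
var height = 100;
var pixels = new Float32Array( width * height * 4 );

this.renderer.readRenderTargetPixels(
    gpuCompute.getCurrentRenderTarget( positionVariable ),
    0, 0,
    width, height,
    pixels
);

// If this works, pixels holds one RGBA quadruple per texel,
// i.e. texel i occupies pixels[ i * 4 ] .. pixels[ i * 4 + 3 ].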
Is there something vital I am overlooking? An example, or an explanation of where I am going wrong, would be much appreciated.