My goal is to pinpoint where a user has clicked on a texture of an object to trigger a response by redrawing the texture.
I've been able to achieve this by rendering my objects with a color-coded texture onto a separate render target and using gl.readPixels to determine which coded pixel was clicked. From there, I calculate the corresponding X and Y coordinates on the texture.
While I can reliably do this for the Y axis, I'm facing some challenges with the X axis.
Here is my simplified Three.js setup:
const canvas = document.getElementById("output"),
    renderer = new THREE.WebGLRenderer({
      canvas: canvas,
      alpha: true,
      antialias: true
    }),
    back = new THREE.WebGLRenderTarget(canvas.width, canvas.height),
    scene = new THREE.Scene(),
    pickingScene = new THREE.Scene(),
    pickingPixelBuffer = new Uint8Array(4),
    camera = new THREE.PerspectiveCamera(50, canvas.width / canvas.height, 0.1, 1000),
    textureWidth = 1024,
    textureHeight = 1024,
    texture = generateTexture(textureWidth, textureHeight),
    pickingTexture = generatePickingTexture(textureWidth, textureHeight),
    obj = textured(shell(w, 5, 10), texture),
    objPicker = textured(shell(w, 5, 10), pickingTexture);

back.generateMipmaps = false;

scene.add(camera);
scene.add(obj);
pickingScene.add(objPicker);
For an object like this:
The picking texture will look like this:
The generateTexture function isn't crucial here. The textured function simply applies a texture to a geometry object:
function textured(geometry, txt){
  const material = new THREE.MeshBasicMaterial({
    color: 0xffffff,
    map: txt,
    transparent: false,
    shading: THREE.FlatShading,
    side: THREE.DoubleSide
  });
  return new THREE.Mesh(geometry, material);
}
And here is the generatePickingTexture function:
function generatePickingTexture(w, h){
  const canvas = document.createElement("canvas");
  canvas.width = w;
  canvas.height = h;

  const texture = new THREE.Texture(canvas);
  const gfx = texture.image.getContext("2d"),
      l = w * h,
      pixels = gfx.createImageData(w, h);

  // Encode each pixel's linear index into its RGB channels:
  // red = high byte, green = middle byte, blue = low byte.
  for(let i = 0, p = 0; i < l; ++i, p += 4){
    pixels.data[p]     = (0xff0000 & i) >> 16;
    pixels.data[p + 1] = (0x00ff00 & i) >> 8;
    pixels.data[p + 2] =  0x0000ff & i;
    pixels.data[p + 3] = 0xff;
  }

  gfx.putImageData(pixels, 0, 0);
  texture.needsUpdate = true;
  return texture;
}
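The index encoding can be checked independently of three.js. The sketch below (a standalone check, not part of the original code) packs a linear pixel index into RGB bytes exactly as the loop above does, then unpacks it the same way the picking step will:

```javascript
// Pack a linear pixel index into RGB bytes (red = high byte, blue = low),
// mirroring the loop in generatePickingTexture.
function encodeIndex(i){
  return [(0xff0000 & i) >> 16, (0x00ff00 & i) >> 8, 0x0000ff & i];
}

// Unpack RGB bytes back into the linear index.
function decodeIndex(r, g, b){
  return (r << 16) | (g << 8) | b;
}

// Every index of a 1024x1024 texture survives the round trip,
// so any corruption must happen between writing and reading the pixels.
const textureWidth = 1024, textureHeight = 1024;
for(let i = 0; i < textureWidth * textureHeight; ++i){
  const [r, g, b] = encodeIndex(i);
  if(decodeIndex(r, g, b) !== i){
    throw new Error("round-trip failed at index " + i);
  }
}
```

This confirms the encoding itself is lossless for any texture up to 256×65536 pixels, since three bytes cover indices up to 2^24 − 1.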
The picking operation is then performed like this:
function pick(){
  renderer.render(pickingScene, camera, back, true);
  const gl = renderer.getContext();
  // readPixels uses a bottom-left origin, so flip the pointer's Y coordinate.
  gl.readPixels(pointerX, canvas.height - pointerY, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pickingPixelBuffer);
  // Reassemble the linear index from the RGB channels.
  const i = (pickingPixelBuffer[0] << 16) |
        (pickingPixelBuffer[1] << 8) |
        pickingPixelBuffer[2],
      x = (i - Math.floor(textureWidth / 512) * 256) % textureWidth,
      y = Math.floor(i / textureWidth);
  console.log(x, y);
}
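If the sampled color comes back exactly as it was written into the picking texture (i.e., nothing has blended neighboring pixels), the decode needs no correction term at all. A minimal sketch of the straightforward decode — decodePick is a hypothetical helper, not part of the original code:

```javascript
// Convert a sampled RGBA picking pixel back to texture coordinates,
// assuming the color was read back exactly as encoded (no filtering).
function decodePick(buffer, textureWidth){
  const i = (buffer[0] << 16) | (buffer[1] << 8) | buffer[2];
  return {
    x: i % textureWidth,               // column within the row
    y: Math.floor(i / textureWidth)    // row
  };
}

// Example: index 5420 in a 1024-wide texture is row 5, column 300.
// 5420 encodes as R=0, G=21, B=44 (0x00152C).
console.log(decodePick([0, 21, 44, 255], 1024));
```

Any systematic deviation between this decode and the clicked location points at the readback colors themselves being wrong, not the arithmetic.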
The y coordinate calculation is accurate, but the x coordinate is consistently off. As the mouse is dragged down the screen, the x coordinate drifts to the right by roughly a quarter of the texture width. When the mouse moves horizontally with no vertical change, the x coordinate tracks the pointer but lands in the wrong place, jumping at every quarter-width mark.
Given that the offset is 1/4th, it suggests that my approach to generating the texture might be flawed. However, I've been unable to pinpoint the issue.
Upon narrowing down my texture to 256 pixels wide, the functionality works flawlessly.
After implementing a workaround in the pick function, I have resolved the problem, although the reason behind its success eludes me.
There are still some remaining issues at certain orientations, unrelated to the X coordinate problem, that appear to be caused by texture resampling.
Ultimately, the root cause turned out to be the default texture filtering settings.
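Three.js defaults a texture's minFilter and magFilter to linear filtering, which blends neighboring picking pixels into interpolated colors that never appear in the texture, corrupting the decoded index. A sketch of the likely fix — switching the picking texture to nearest-neighbor sampling inside generatePickingTexture before returning:

```javascript
// Disable filtering so sampled colors come back exactly as encoded;
// linear filtering blends adjacent index colors into meaningless values.
texture.minFilter = THREE.NearestFilter;
texture.magFilter = THREE.NearestFilter;
texture.generateMipmaps = false;
```

With filtering off, the correction term in the workaround should no longer be necessary, and a plain `i % textureWidth` decode for x should suffice.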