I've recently been reading up on mipmapping and what it is, but I'm not sure how to actually implement the technique in three.js.
I've looked at a couple of examples, such as this one:
and also this one:
Both of them seem to use mipmapping. The first one contains this section of code:
// Draws a size x size canvas: dark grey background with two coloured
// quadrants, so each mip level is visually distinguishable.
function mipmap( size, color ) {
    var imageCanvas = document.createElement( "canvas" ),
        context = imageCanvas.getContext( "2d" );
    imageCanvas.width = imageCanvas.height = size;
    context.fillStyle = "#444";
    context.fillRect( 0, 0, size, size );
    context.fillStyle = color;
    context.fillRect( 0, 0, size / 2, size / 2 );
    context.fillRect( size / 2, size / 2, size / 2, size / 2 );
    return imageCanvas;
}
// Level 0 is the full-resolution 128 x 128 canvas.
var canvas = mipmap( 128, '#f00' );
var textureCanvas1 = new THREE.CanvasTexture( canvas );

// Fill the mipmap chain by hand: each level is half the size of the
// previous one, down to 1 x 1, each in a different colour.
textureCanvas1.mipmaps[ 0 ] = canvas;
textureCanvas1.mipmaps[ 1 ] = mipmap( 64, '#0f0' );
textureCanvas1.mipmaps[ 2 ] = mipmap( 32, '#00f' );
textureCanvas1.mipmaps[ 3 ] = mipmap( 16, '#400' );
textureCanvas1.mipmaps[ 4 ] = mipmap( 8, '#040' );
textureCanvas1.mipmaps[ 5 ] = mipmap( 4, '#004' );
textureCanvas1.mipmaps[ 6 ] = mipmap( 2, '#044' );
textureCanvas1.mipmaps[ 7 ] = mipmap( 1, '#404' );

// Tile the texture many times so distant tiles are minified and the
// lower mip levels actually get sampled.
textureCanvas1.repeat.set( 1000, 1000 );
textureCanvas1.wrapS = THREE.RepeatWrapping;
textureCanvas1.wrapT = THREE.RepeatWrapping;

// Second texture: same mip chain, but with nearest-neighbour filtering.
var textureCanvas2 = textureCanvas1.clone();
textureCanvas2.magFilter = THREE.NearestFilter;
textureCanvas2.minFilter = THREE.NearestMipMapNearestFilter;

var materialCanvas1 = new THREE.MeshBasicMaterial( { map: textureCanvas1 } );
var materialCanvas2 = new THREE.MeshBasicMaterial( { color: 0xffccaa, map: textureCanvas2 } );

// Two large ground planes, one per material.
var geometry = new THREE.PlaneBufferGeometry( 100, 100 );

var meshCanvas1 = new THREE.Mesh( geometry, materialCanvas1 );
meshCanvas1.rotation.x = -Math.PI / 2;
meshCanvas1.scale.set( 1000, 1000, 1000 );

var meshCanvas2 = new THREE.Mesh( geometry, materialCanvas2 );
meshCanvas2.rotation.x = -Math.PI / 2;
meshCanvas2.scale.set( 1000, 1000, 1000 );
What confuses me is this line:
textureCanvas1.mipmaps[ 1 ] = mipmap( 64, '#0f0' );
and why a 2D canvas context is used at all.
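As far as I can tell, each entry in texture.mipmaps is simply an image (here a 2D canvas) at half the resolution of the previous level, down to 1 x 1, so I assume the same chain could be built in a loop. This is just an untested sketch of my understanding, reusing the mipmap() helper above; the colours are arbitrary placeholders:

// Rough sketch: the mip chain is just an array of images, each half the
// size of the previous one, down to 1 x 1.
var colors = [ '#f00', '#0f0', '#00f', '#400', '#040', '#004', '#044', '#404' ];
var levels = [];
for ( var i = 0, size = 128; size >= 1; i ++, size /= 2 ) {
    levels.push( mipmap( size, colors[ i ] ) );       // level i is (128 / 2^i) pixels wide
}
var texture = new THREE.CanvasTexture( levels[ 0 ] );
texture.mipmaps = levels;                             // hand the whole chain to three.js
texture.generateMipmaps = false;                      // don't auto-generate over it
texture.minFilter = THREE.LinearMipMapLinearFilter;   // a filter that actually uses the chain
texture.needsUpdate = true;

Is that roughly what is going on?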
Given these examples, I'm still not sure how to properly mipmap a planet. Specifically, I assumed my sphere would need to consist of distinct sections, so that each piece of the subdivided texture could be placed on a different part of the sphere. After creating the power-of-2 size variations of the texture, what comes next?
So my question is: how does mipmapping in three.js work when applied to shapes like cubes, spheres, and so on? A simple demonstration would help enormously, since the existing examples (which are scarce) are sometimes too convoluted or undocumented.
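For reference, the simplest thing I can picture for a sphere is below. I don't know whether this is actually how it's meant to be done; the texture path is made up, and I'm assuming a scene and renderer already exist:

// Rough sketch: one texture on a sphere, letting three.js generate the
// mipmaps automatically ('images/planet.jpg' is a made-up file name).
var loader = new THREE.TextureLoader();
loader.load( 'images/planet.jpg', function ( texture ) {
    texture.generateMipmaps = true;                     // GPU builds the chain from the image
    texture.minFilter = THREE.LinearMipMapLinearFilter; // trilinear filtering when far away
    texture.magFilter = THREE.LinearFilter;
    var planet = new THREE.Mesh(
        new THREE.SphereGeometry( 50, 64, 64 ),
        new THREE.MeshBasicMaterial( { map: texture } )
    );
    scene.add( planet );
} );

Is mipmapping a planet really as simple as that, or does the sphere itself have to be split up somehow?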
EDIT: Another user on Stack Overflow shared the following snippet:
var texture = THREE.ImageUtils.loadTexture( 'images/512.png', undefined, function () {
    // Runs once the 512 x 512 image has loaded.
    texture.repeat.set( 1, 1 );
    texture.mipmaps[ 0 ] = texture.image;  // only level 0 is supplied...
    texture.generateMipmaps = true;        // ...the rest are generated automatically
    texture.needsUpdate = true;
} );
So `texture.mipmaps[]` clearly matters here, but this person only supplies a single image. Shouldn't we be providing several images and letting the system pick the appropriate one based on distance? I still don't understand how this mipmapping mechanism works.
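What I imagined manual mipmapping would look like is something more like the following, with one pre-scaled image per level. This is only a guess: the file names 512.png down to 1.png are made up, and each image would have to be exactly half the size of the previous one:

// Rough guess at a fully manual chain built from separate image files.
var sizes = [ 512, 256, 128, 64, 32, 16, 8, 4, 2, 1 ];
var images = [];
var loaded = 0;
sizes.forEach( function ( size, i ) {
    var img = new Image();
    img.onload = function () {
        loaded ++;
        if ( loaded === sizes.length ) {
            var texture = new THREE.Texture( images[ 0 ] );
            texture.mipmaps = images;                           // level 0 first, then the halves
            texture.generateMipmaps = false;                    // keep the custom chain
            texture.minFilter = THREE.LinearMipMapLinearFilter; // level chosen by on-screen size
            texture.needsUpdate = true;
        }
    };
    images[ i ] = img;
    img.src = 'images/' + size + '.png';   // made-up file names
} );

Is that how it is supposed to work, or does setting generateMipmaps = true on a single image really do the same job?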