My process of adding a plane to the scene goes like this:
// Setting up Camera
this.three.camera = new THREE.PerspectiveCamera(45, window.innerWidth/window.innerHeight, 0.1, 60);
// Creating Plane
const planeGeometry = new THREE.PlaneBufferGeometry(1, 1, this.options.planeSegments, this.options.planeSegments);
const planeMat = new THREE.ShaderMaterial( ... );
this.three.plane = new THREE.Mesh(planeGeometry, planeMat);
this.three.scene.add(this.three.plane);
It's a simple process so far. The next step is to work out how far to move the plane along the Z axis so that it fills the browser viewport. To accomplish this, I solve for closeZ, the distance from the camera at which the 1×1 plane exactly fills the view:
// Referencing attachment "solving for this" - closeZ is calculated
const closeZ = 0.5 / Math.tan((this.three.camera.fov/2.0) * Math.PI / 180.0);
this.uniforms.uZMax = new THREE.Uniform(this.three.camera.position.z - closeZ);
https://i.sstatic.net/7tRku.png
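The closeZ formula can be checked in isolation. A minimal sketch of the same math, assuming `fov` here is Three.js's vertical field of view in degrees, with `aspect` standing in for `window.innerWidth / window.innerHeight` (both values below are made up for illustration):

```javascript
const fov = 45;
const aspect = 16 / 9; // stand-in for window.innerWidth / window.innerHeight

// Distance at which a plane of height 1 exactly fills the view vertically,
// assuming fov is the *vertical* field of view:
const closeZHeight = 0.5 / Math.tan((fov / 2) * Math.PI / 180);

// To fill the *width* with a plane of width 1 instead, the horizontal
// field of view is needed: tan(hFov / 2) = tan(vFov / 2) * aspect.
const closeZWidth = 0.5 / (Math.tan((fov / 2) * Math.PI / 180) * aspect);

console.log(closeZHeight, closeZWidth);
```

With a landscape aspect ratio, closeZWidth comes out smaller than closeZHeight: the plane has to move closer to the camera to span the viewport's width than to span its height.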
With this information, I can determine how much to add to the vertex Z in my shader to make the plane fill the viewport. Here is what the vertex shader looks like:
uniform float uZMax;

void main() {
  // vec3 constructor was missing: (position.xy, uZMax) is not valid GLSL
  vec3 pos = vec3(position.xy, uZMax);
  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
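It helps to trace one vertex through the projection by hand. A sketch of the perspective math, not Three.js itself: `projectY` is a hypothetical helper that mimics what `projectionMatrix` plus the perspective divide do to the y coordinate, assuming a camera at the origin looking down -Z and a symmetric frustum:

```javascript
// NDC y after a symmetric perspective projection with vertical fov (degrees).
// In NDC, y = ±1 corresponds to the top/bottom edges of the viewport.
function projectY(y, z, fovDeg) {
  const f = 1 / Math.tan((fovDeg / 2) * Math.PI / 180);
  return (f * y) / -z; // perspective divide; camera looks down -Z
}

const fov = 45;
const closeZ = 0.5 / Math.tan((fov / 2) * Math.PI / 180);

// The top edge of a unit-height plane (y = 0.5) placed at distance closeZ
// projects exactly onto the top of the viewport:
const top = projectY(0.5, -closeZ, fov);
console.log(top); // ≈ 1
```

Because fov governs the vertical frustum, NDC y reaches ±1 at exactly closeZ, while NDC x only reaches ±1 at the smaller distance closeZ / aspect.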
The current setup does zoom the plane to fill the viewport, but it fills it along the Y axis, not the X axis.
https://i.sstatic.net/D6Zjs.png
Why are my calculations affecting the Y axis, and how can I adjust them so that the plane fills the width of the viewport instead of just its height?
Edit:
The goal I'm striving for is similar to what is demonstrated in this example. However, in that example they simply scale the x and y coordinates to fill the screen, without true 3D rendering or lighting.
I aim to physically move the plane closer to the camera by adjusting z-values, allowing me to calculate surface normals and eventually incorporate lighting in the fragment shader based on the alignment of these normals with the direction of light - akin to raymarching techniques.
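The lighting math I have in mind can be prototyped in plain JavaScript before moving it into the fragment shader. A hedged sketch, not the actual shader: `height` is a made-up displacement function, and the normal comes from central differences, the same trick commonly used in raymarching:

```javascript
// Made-up displacement; stands in for whatever the shader adds along Z.
function height(x, y) {
  return 0.1 * Math.sin(x * 4.0) * Math.cos(y * 4.0);
}

// Approximate the surface normal of z = height(x, y) with central
// differences; the unnormalized normal is (-dh/dx, -dh/dy, 1).
function surfaceNormal(x, y, eps = 1e-3) {
  const dx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps);
  const dy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps);
  const len = Math.hypot(dx, dy, 1);
  return [-dx / len, -dy / len, 1 / len];
}

// Diffuse (Lambert) term: alignment of the normal with the light direction.
function lambert(normal, lightDir) {
  const len = Math.hypot(...lightDir);
  const l = lightDir.map(c => c / len);
  const d = normal[0] * l[0] + normal[1] * l[1] + normal[2] * l[2];
  return Math.max(d, 0); // clamp light hitting the back face to zero
}

const n = surfaceNormal(0.2, 0.3);
const intensity = lambert(n, [0, 0, 1]); // light aimed straight at the plane
console.log(intensity);
```

The fragment-shader version would be the same dot-product math on `vec3`s, with the light direction supplied as a uniform.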