/** * @file * * Summary. *
Equirectangular and Mercator projection viewer using lighting combined with * {@link https://web.engr.oregonstate.edu/~mjb/cs550/PDFs/TextureMapping.4pp.pdf texture mapping}, * written in vanilla JavaScript and WebGL.
* *For educational purposes only.
*This is just a demo for teaching {@link https://en.wikipedia.org/wiki/Computer_graphics CG}, * which became {@link https://www.youtube.com/watch?v=uhiCFdWeQfA overly complicated}, * and it is similar to Lighting2, * except that we define a 3x3 matrix for {@link https://learnopengl.com/Lighting/Materials material properties} * and a 3x3 matrix for {@link https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Lighting_in_WebGL light properties}, * which are passed to the fragment shader as * {@link https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/uniform uniforms}. * * Edit the {@link lightPropElements light} and {@link matPropElements material} matrices in the global variables to experiment, or * {@link startForReal} to choose a model and select * {@link https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-shading/shading-normals face or vertex normals}. * Indeed, {@link https://threejs.org Three.js} only uses face normals for * {@link https://threejs.org/docs/#api/en/geometries/PolyhedronGeometry polyhedra}.
* *Texture coordinates can be set in each model or {@link https://gamedev.stackexchange.com/questions/197931/how-can-i-correctly-map-a-texture-onto-a-sphere sampled at each pixel} * in the {@link https://raw.githubusercontent.com/krotalias/cwdc/main/13-webgl/extras/LightingWithTexture.html fragment shader}. * We can also approximate a sphere by subdividing a * {@link https://en.wikipedia.org/wiki/Regular_polyhedron convex regular polyhedron}; in this case, mipmapping artifact issues are solved * by using {@link https://vcg.isti.cnr.it/Publications/2012/Tar12/jgt_tarini.pdf Tarini's} method. * These {@link https://bgolus.medium.com/distinctive-derivative-differences-cce38d36797b artifacts} * show up due to the discontinuity at the seam, where one side of the line is at 0 radians and the other at 2π. * Some triangles may have edges that cross this line, causing a wrong (far too coarse) mipmap level to be chosen. * *
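 * For example (illustrative numbers only, not measured in this program), the
 * mipmap level chosen by the GPU grows with the screen-space derivative of the
 * "u" coordinate:
 *
 * const texWidth = 2048;
 * const mipLevel = (du) => Math.max(0, Math.log2(Math.abs(du) * texWidth));
 * mipLevel(1 / 48 / 10);  // interior triangle, u advances 1/48 over 10 px: ≈ 2.1
 * mipLevel(47 / 48 / 10); // seam triangle, u wraps by 47/48 over 10 px: ≈ 7.6
 *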
To lay a map onto a sphere, textures should have an aspect ratio of 2:1 for equirectangular projections * or 1:1 (squared) for Mercator projections. Finding high-resolution, good-quality, * and free {@link https://www.axismaps.com/guide/map-projections cartographic maps} * is really difficult.
* *The initial position on the screen takes into account the {@link https://science.nasa.gov/science-research/earth-science/milankovitch-orbital-cycles-and-their-role-in-earths-climate/ obliquity} * of the Earth ({@link viewMatrix 23.44°}), and the {@link https://en.wikipedia.org/wiki/Phong_reflection_model Phong highlight} * projects onto the {@link https://en.wikipedia.org/wiki/Equator equator line} * if the user has not interacted using the {@link http://courses.cms.caltech.edu/cs171/assignments/hw3/hw3-notes/notes-hw3.html#NotesSection2 Arcball}. * If {@link https://www.php.net PHP} is running on the {@link https://developer.mozilla.org/en-US/docs/Learn/Common_questions/Web_mechanics/What_is_a_web_server HTTP server}, * then any image file in directory textures * will be available in the {@link readFileNames menu}. Otherwise (sorry, {@link https://pages.github.com GitHub Pages}), * only the images listed in the HTML file are available.
* *
Maps are transformations from {@link module:polyhedron.cartesian2Spherical 3D space} * to {@link module:polyhedron.spherical2Mercator 2D space}, and they can preserve areas * ({@link https://en.wikipedia.org/wiki/Equal-area_projection equal-area maps}) or angles * ({@link https://en.wikipedia.org/wiki/Conformal_map conformal maps}). The success of the * {@link https://en.wikipedia.org/wiki/Mercator_projection Mercator projection} * lies in its ability to preserve angles, making it ideal for navigation * (directions on the map match the directions on the compass). However, it distorts areas, * especially near the poles, where landmasses appear {@link https://math.uit.no/ansatte/dennis/MoMS2017-Lec3.pdf much larger} * than they are in reality. Meridian and parallel {@link https://en.wikipedia.org/wiki/Scale_(map) scales} * are the same, meaning that distances along a parallel or meridian (in fact, in all directions) are equally stretched * by a factor of sec(φ) = 1/cos(φ), where φ ∈ [-85.051129°, 85.051129°] is its latitude.
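 *
 * A minimal sketch (not code from this file) of the Mercator ordinate and of
 * the sec(φ) stretch factor:
 *
 * const toRad = (deg) => (deg * Math.PI) / 180;
 * const mercatorY = (lat) => Math.log(Math.tan(Math.PI / 4 + toRad(lat) / 2));
 * const stretch = (lat) => 1 / Math.cos(toRad(lat)); // sec(φ)
 * mercatorY(85.051129); // ≈ π, which is why a square (1:1) map ends at this latitude
 * stretch(60); // 2: distances at latitude 60° appear doubled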
* *The {@link https://en.wikipedia.org/wiki/Web_Mercator_projection Web Mercator} * projection, on the other hand, is a variant of the Mercator projection that is {@link https://en.wikipedia.org/wiki/Google_Maps widely} * used in {@link https://en.wikipedia.org/wiki/Web_mapping web mapping} applications. * It was designed to work well with the Web Mercator coordinate system, * which is based on the {@link https://en.wikipedia.org/wiki/World_Geodetic_System#WGS_84 WGS 84 datum}. * * The projection is neither strictly ellipsoidal nor strictly spherical, * and it uses a spherical development of ellipsoidal coordinates. * The underlying geographic coordinates are defined using the WGS 84 ellipsoidal model * of the Earth's surface but are projected as if * {@link https://alastaira.wordpress.com/2011/01/23/the-google-maps-bing-maps-spherical-mercator-projection/ defined on a sphere}. * * Mistaking Web Mercator for the standard Mercator during coordinate conversion can lead to * {@link https://web.archive.org/web/20170329065451/https://earth-info.nga.mil/GandG/wgs84/web_mercator/index.html deviations} * of as much as 40 km on the ground.
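 *
 * As an illustration (standard Web Mercator formulas, not code from this file),
 * WGS 84 longitude/latitude in degrees map to pixels at zoom level z as:
 *
 * const webMercator = (lon, lat, z) => {
 *   const size = 256 * 2 ** z; // world width in pixels at zoom z
 *   const phi = (lat * Math.PI) / 180;
 *   return {
 *     x: ((lon + 180) / 360) * size,
 *     y: ((1 - Math.log(Math.tan(Math.PI / 4 + phi / 2)) / Math.PI) / 2) * size,
 *   };
 * };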
* *It is impressive how {@link https://en.wikipedia.org/wiki/Gerardus_Mercator Gerardus Mercator} was able to create such a projection at a * {@link https://personal.math.ubc.ca/~israel/m103/mercator/mercator.html time} (1569) when there was no * calculus (integrals and derivatives: {@link https://en.wikipedia.org/wiki/History_of_calculus Leibniz, 1674; Newton, 1666}) or even logarithm tables * ({@link https://en.wikipedia.org/wiki/John_Napier John Napier}, 1614).
* * {@link https://www.esri.com/arcgis-blog/products/arcgis-pro/mapping/mercator-its-not-hip-to-be-square/ Mercator texture coordinates} * can be set in a {@link module:polyhedron.setMercatorCoordinates model} directly or in * the shader * that samples texture coordinates for each pixel. * * Since a unit sphere fits in the WebGL {@link https://carmencincotti.com/2022-11-28/from-clip-space-to-ndc-space/ NDC space}, * it is possible to go, in each fragment, from NDC coordinates to a point on the sphere, * and from there to spherical and texture coordinates. * *The {@link https://en.wikipedia.org/wiki/Equirectangular_projection equirectangular projection}, in turn, is neither equal-area nor conformal. * Its meridian scale is 1, meaning that the distance along lines of longitude * remains the same across the map, while its parallel * {@link https://en.wikipedia.org/wiki/Scale_(map) scale} varies with latitude, * which means that the distance along lines of latitude is stretched by a factor of sec(φ). * * {@link https://en.wikipedia.org/wiki/Ptolemy Ptolemy} claims that * {@link https://en.wikipedia.org/wiki/Marinus_of_Tyre Marinus of Tyre} * invented the projection in the first century (AD 100). * In particular, its special case with the equator as the standard parallel, the plate carrée (flat square), has become a standard for * {@link https://gisgeography.com/best-free-gis-data-sources-raster-vector/ global raster datasets}.
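 *
 * A sketch of that per-fragment mapping (the conventions are illustrative;
 * the actual shader may differ): from a point p on the unit sphere, with y as
 * the north pole, to equirectangular texture coordinates:
 *
 * const sphere2UV = (p) => ({
 *   s: 0.5 + Math.atan2(p[0], p[2]) / (2 * Math.PI), // longitude → u
 *   t: 0.5 + Math.asin(p[1]) / Math.PI,              // latitude  → v
 * });
 * sphere2UV([0, 0, 1]); // {s: 0.5, t: 0.5}: prime meridian at the image center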
* *As a final remark, I thought it would be easier to deal with map images as textures, but I was mistaken. I tried, as long as I could, * not to rewrite third-party code. Unfortunately, this was impossible. The main issue was that the * {@link https://en.wikipedia.org/wiki/Prime_meridian prime meridian} is * at the center of a map image and not at its border, which corresponds to * its {@link https://en.wikipedia.org/wiki/180th_meridian antimeridian}.
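 *
 * Conceptually, a half-revolution shift of the "u" coordinate is all it takes
 * to move the prime meridian between the border and the center of the image
 * (an illustrative one-liner, not the actual code of
 * {@link module:polyhedron.rotateUTexture}):
 *
 * const shiftU = (u) => (u + 0.5) % 1; // rotate the texture by 180°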
* *Initially, I used the {@link https://math.hws.edu/graphicsbook/demos/script/basic-object-models-IFS.js basic-object-models-IFS} package, * but the models had their z-axis pointing up as the {@link https://en.wikipedia.org/wiki/Zenith zenith}, * and I wanted the y-axis to be the north pole (up). * Therefore, I switched to {@link getModelData Three.js}, and almost everything * worked just fine. Nonetheless, a sphere created by subdividing a {@link https://threejs.org/docs/#api/en/geometries/PolyhedronGeometry polyhedron} * had its {@link module:polyhedron.rotateUTexture texture coordinates} rotated by 180° * and a cylinder or cone by 90°. In fact, there is a poorly documented parameter, * {@link https://threejs.org/docs/#api/en/geometries/ConeGeometry thetaStart}, that does fix just that.
* *Nevertheless, I decided to adapt the {@link https://math.hws.edu/graphicsbook/ hws} software * to my needs by introducing a global hook, {@link yNorth}, * and {@link setNorth rotating} the models accordingly. Furthermore, I added the parameter stacks to {@link uvCone} and {@link uvCylinder} * to improve interpolation, and fixed the number of triangles generated in uvCone. This way, the set of models in hws and * three.js became quite similar, although I kept the "zig-zag" mesh for cones and cylinders in hws * (I have no idea whether it provides any practical advantage). * A user can switch between hws and three.js models by pressing a single key (Alt, ❖ or ⌘) in the interface.
* *There is a lot of redundancy in the form of {@link https://stackoverflow.com/questions/36179507/three-js-spherebuffergeometry-why-so-many-vertices vertex duplication} * in all of these models that precludes mipmapping artifacts. * The theoretical number of vertices, 𝑣, for a {@link https://en.wikipedia.org/wiki/Manifold manifold model} * and the actual number of vertices (🔴) are {@link createModel displayed} in the interface. * The number of edges, e, is simply three times the number of triangles, t, divided by two.
* * For any triangulation of a {@link https://en.wikipedia.org/wiki/Surface_(topology) compact surface}, * the following holds (page 52): v − e + t = χ, where χ is the Euler characteristic of the surface (2 for a sphere). * Since e = 3t/2, the theoretical count is v = χ + t/2. * *As a proof of concept, I implemented a {@link uvSphereND sphere} model without any vertex duplication. * Besides being much harder to code, its last slice (e.g., slices = 48) goes from 6.152285613280011 (2π/48 * 47) to 0.0 * and not 2π (as it would if there were an extra duplicate vertex), which generates texture coordinates * going from 0.9791666666666666 (47/48) to 0.0 and not 1.0. * Although this discontinuity is what causes the mipmapping artifacts, it has nothing to do with the topology of the model * but with how mipmapping is {@link https://developer.nvidia.com/gpugems/gpugems2/part-iii-high-quality-rendering/chapter-28-mipmap-level-measurement implemented} on the GPU. * * However, since an entire line is mapped onto the vertex at the north or south pole, * and a vertex can have only one pair of texture coordinates (u,v), * no matter what value we use for the "u" coordinate (e.g., 0 or 0.5), * the interpolation will produce an awkward swirl effect at the poles.
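 *
 * For instance, using v = χ + t/2 from above (a sketch, not code from this file):
 *
 * const theoreticalVertices = (t, chi = 2) => chi + t / 2; // from v − e + t = χ, e = 3t/2
 * theoreticalVertices(20); // icosahedron: 12 vertices
 * theoreticalVertices(4);  // tetrahedron: 4 vertices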
* * Of course, these are just {@link https://en.wikipedia.org/wiki/Polygon_mesh polygon meshes} suitable for visualization * and not valid topological {@link https://en.wikipedia.org/wiki/Boundary_representation B-rep} * models that enforce the {@link https://www.britannica.com/science/Euler-characteristic Euler characteristic} * by using the {@link https://people.computing.clemson.edu/~dhouse/courses/405/papers/p589-baumgart.pdf winged-edge}, * {@link https://dl.acm.org/doi/pdf/10.1145/282918.282923 quad-edge}, * or radial-edge data structures required in * {@link https://www.sciencedirect.com/science/article/abs/pii/S0010448596000668?via%3Dihub solid modeling}. * *{@link https://www.youtube.com/watch?v=Otm4RusESNU Homework}:
* *Canvas element and its tooltip.
*Canvas is used for drawing the globe, and its tooltip is used for displaying * the GCS coordinates (longitude and latitude) on the globe when the pointer is moved over it.
*Canvas is a bitmap element that can be used to draw graphics on the fly via scripting (usually JavaScript). * It is a part of the HTML5 specification and is supported by all modern browsers.
*Tooltip is a small pop-up box that appears when the user hovers over an element. * It is used to provide additional information about the element, such as its coordinates.
*Both canvas and tooltip are used to provide a better user experience * by allowing the user to interact with the globe and see its coordinates.
* @type {HTMLCanvasElement} */ const canvas = document.getElementById("theCanvas"); /** *Tooltip element for displaying GCS coordinates on the globe.
*Tooltip is a small pop-up box that appears when the user hovers over * an element. It is used to provide additional information about the element, * such as its coordinates.
*Tooltip is used to provide a better user experience by allowing the user * to interact with the globe and see its coordinates.
* @type {HTMLElement} */ const canvastip = document.getElementById("canvastip"); /** * HTML elements in the interface. * @type {Object} * @property {HTMLInputElement} mesh checkbox * @property {HTMLInputElement} axes radio * @property {HTMLInputElement} equator checkbox * @property {HTMLInputElement} hws checkbox * @property {HTMLInputElement} fix_uv checkbox * @property {HTMLInputElement} merc checkbox * @property {HTMLInputElement} cull checkbox * @property {HTMLInputElement} texture checkbox * @property {HTMLSelectElement} textures select * @property {HTMLSelectElement} models select * @property {HTMLImageElement} textimg img * @property {HTMLInputElement} tooltip checkbox * @property {HTMLInputElement} tip checkbox * @property {HTMLInputElement} php checkbox * @property {HTMLButtonElement} closest button * @property {HTMLButtonElement} animation button * @property {HTMLInputElement} byDate checkbox * @property {HTMLInputElement} locations checkbox * @property {HTMLInputElement} timeline range * @property {HTMLLabelElement} lblTimeline label * @property {HTMLDataListElement} steplist list * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement HTMLElement} * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement HTMLInputElement} * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/HTMLSelectElement HTMLSelectElement} * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/HTMLImageElement HTMLImageElement} * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement HTMLCanvasElement} * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/HTMLLabelElement HTMLLabelElement} * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/HTMLDataListElement HTMLDataListElement} * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/HTMLButtonElement HTMLButtonElement} */ const element = { mesh: document.getElementById("mesh"), axes: document.getElementById("axes"), equator: document.getElementById("equator"), hws: document.getElementById("hws"), fix_uv: document.getElementById("fixuv"), merc: document.getElementById("mercator"), cull: document.getElementById("culling"), texture: document.getElementById("texture"), textures: document.getElementById("textures"), models: document.getElementById("models"), textimg: document.getElementById("textimg"), tooltip: document.getElementById("tooltip"), tip: document.getElementById("tip"), php: document.getElementById("php"), print: document.getElementById("print"), closest: document.getElementById("cls"), animation: document.getElementById("anim"), byDate: document.getElementById("cities"), locations: document.getElementById("locs"), timeline: document.getElementById("timeline"), lblTimeline: document.getElementById("lblTimeline"), steplist: document.getElementById("steplist"), }; /** * Convert spherical coordinates to {@link GCS} * (longitude, latitude). * @param {Object<{s:Number,t:Number}>} uv spherical coordinates ∈ [0,1]}. * @return {Object<{longitude: Number, latitude: Number}>} longitude ∈ [-180°,180°], latitude ∈ [-90°,90°]. * @function */ const spherical2gcs = (uv) => { // Convert UV coordinates to longitude and latitude return { longitude: uv.s * 360 - 180, latitude: uv.t * 180 - 90, }; }; /** * Convert from {@link GCS} * (longitude, latitude) to UV coordinates. * @param {GCS} gcs longitude ∈ [-180°,180°], latitude ∈ [-90°,90°]. * @return {Object<{s: Number, t: Number}>} UV coordinates ∈ [0,1]. 
* @function */ const gcs2UV = (gcs) => { // Convert longitude and latitude to UV coordinates. return { s: (gcs.longitude + 180) / 360, t: (gcs.latitude + 90) / 180, }; }; /** * Convert from {@link https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Using_textures_in_WebGL UV coordinates} * (s, t) to {@link https://en.wikipedia.org/wiki/Spherical_coordinate_system spherical coordinates}. * @param {Object<{s: Number,t:Number}>} uv ∈ [0,1]. * @return {Array<Number>} spherical coordinates ∈ [0,2π] x [-π,0]. * @function */ const UV2Spherical = (uv) => { return [uv.s * 2 * Math.PI, -uv.t * Math.PI]; }; /** *Calculate distances on the globe using the Haversine Formula.
* Usage: ** const distance = haversine( * gpsCoordinates["Alexandria"], * gpsCoordinates["Aswan"], * ); * console.log(`Distance: ${Math.round(distance.m)} m`); * console.log(`Distance: ${Math.round(distance.km)} km`); * * >> Distance: 843754 m * >> Distance: 844 km ** @param {GCS} gcs1 first pair of gcs coordinates. * @param {GCS} gcs2 second pair of gcs coordinates. * @return {Object<{m: Number, km: Number}>} distance between gcs1 and gcs2, in meters and kilometers. * @see {@link https://en.wikipedia.org/wiki/Haversine_formula Haversine formula} * @see {@link https://community.esri.com/t5/coordinate-reference-systems-blog/distance-on-a-sphere-the-haversine-formula/ba-p/902128 Distance on a sphere: The Haversine Formula} * @see {@link https://www.distancecalculator.net/from-alexandria-to-aswan Distance from Alexandria to Aswan} */ function haversine(gcs1, gcs2) { // Coordinates in decimal degrees (e.g. 2.89078, 12.79797) const { latitude: lat1, longitude: lon1 } = gcs1; const { latitude: lat2, longitude: lon2 } = gcs2; const R = 6371000; // radius of Earth in meters const phi_1 = toRadian(lat1); const phi_2 = toRadian(lat2); const delta_phi = toRadian(lat2 - lat1); const delta_lambda = toRadian(lon2 - lon1); const a = Math.sin(delta_phi / 2.0) ** 2 + Math.cos(phi_1) * Math.cos(phi_2) * Math.sin(delta_lambda / 2.0) ** 2; const c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)); const m = R * c; // distance in meters const km = m / 1000.0; // distance in kilometers return { m, km }; } /** * Three.js module. * @author Ricardo Cabello ({@link https://coopermrdoob.weebly.com/ Mr.doob}) * @since 24/04/2010 * @license Licensed under the {@link https://www.opensource.org/licenses/mit-license.php MIT license} * @external three * @see {@link https://threejs.org/docs/#manual/en/introduction/Installation Installation} * @see {@link https://discoverthreejs.com DISCOVER three.js} * @see {@link https://riptutorial.com/ebook/three-js Learning three.js} * @see {@link https://github.com/mrdoob/three.js github} * @see {@link http://cindyhwang.github.io/interactive-design/Mrdoob/index.html An interview with Mr.doob} * @see {@link https://experiments.withgoogle.com/search?q=Mr.doob Experiments with Google} * @see Notes */ /** *
Main three.js namespace.
* Imported from {@link external:three three.module.js} * * @namespace THREE */ /** *A representation of mesh, line, or point geometry.
* Includes vertex positions, face indices, normals, colors, UVs, * and custom attributes within buffers, reducing the cost of * passing all this data to the GPU. * @class BufferGeometry * @memberof THREE * @see {@link https://threejs.org/docs/#api/en/core/BufferGeometry BufferGeometry} */ // default texture const defTexture = document .getElementById("textures") .querySelector("[selected]"); /** * Array holding image file names to create textures from. * @type {Array<String>} */ /** *A set of world locations given by their GPS coordinates.
* These locations are {@link event:load read} from a JSON file. * @type {Object} */ /** *Light properties.
* Ambient, diffuse and specular. *Remember this is column major.
* @type {Object} */
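// A sketch with illustrative values only: each entry is a 3x3 column-major
// matrix packing one light's ambient, diffuse and specular RGB triples
// (one column each), ready to be sent with gl.uniformMatrix3fv.
// matPropElements below follows the same layout, one entry per material.
const lightPropElements = {
  white_light: new Float32Array([
    0.2, 0.2, 0.2, // ambient  (first column)
    0.7, 0.7, 0.7, // diffuse  (second column)
    0.7, 0.7, 0.7, // specular (third column)
  ]),
};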
/** *Material properties. * Ambient, diffuse and specular. *Remember this is column major.
* @type {Object} */ /** *Specular term exponent used in the * {@link https://en.wikipedia.org/wiki/Phong_reflection_model Phong reflection model}.
* One entry for each material property. * @type {Array<Number>} */ /** *Light Position.
* Phong illumination model will highlight * the projection of this position * on the current model. *In the case of a sphere, it will trace the equator, * if no other rotation is applied by the user.
* @type {Array<Number>} */ /** *Decomposes vector v into components parallel and perpendicular to w.
* The projection and perpendicular components are given by: * proj = ((v · w) / (w · w)) w and perp = v − proj. */
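// A minimal sketch of such a decomposition, consistent with the formula above
// and with how setYUp consumes its result; uses gl-matrix's vec3.
function decomposeVector(v, w) {
  const scale = vec3.dot(v, w) / vec3.dot(w, w); // (v · w) / (w · w)
  const proj = vec3.scale([], w, scale); // component parallel to w
  const perp = vec3.subtract([], v, proj); // component perpendicular to w
  return { proj, perp };
}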
/** *Promise for returning an array with all file names in directory './textures'. * *Since PHP runs on the server, and JavaScript on the browser, * a PHP script is invoked asynchronously via Ajax, because JavaScript doesn't * have access to the filesystem.
* *The JavaScript Fetch API provides a modern, promise-based interface for making * network requests, such as fetching data from an API. * It is designed to replace older methods like XMLHttpRequest and offers a more * streamlined way to handle asynchronous operations.
* * The Response object provides methods to parse the response body in various formats, * such as json(), text(), blob(), arrayBuffer(), and formData(). * * @type {Promise<Array<String>>} */ /** *Matrix for taking normals into eye space.
* Return a matrix to transform normals, so they stay * perpendicular to surfaces after a linear transformation. * @param {mat4} model model matrix. * @param {mat4} view view matrix. * @returns {mat3} (M⁻¹)ᵀ - the 3x3 normal matrix (transpose of the inverse) of the 4x4 modelview matrix. * @see n′ = (M⁻¹)ᵀ ⋅ n */ function makeNormalMatrixElements(model, view) { const modelview = mat4.multiply([], view, model); return mat3.normalFromMat4([], modelview); } /** * Translate keydown events to strings. * @param {KeyboardEvent} event keyboard event. * @return {String | null} * @see http://javascript.info/tutorial/keyboard-events */ function getChar(event) { event = event || window.event; const charCode = event.key || String.fromCharCode(event.which); return charCode; } /** * Checks if the given texture file name represents a map. * It looks for the substrings "map", "earth", "ndvi" or "ocean" * in the file name. The check is case insensitive. * @param {String} filename texture file name. * @returns {Boolean} whether the texture represents a map. */ function checkForMapTexture(filename) { return ["map", "earth", "ndvi", "ocean"].some((str) => filename.toLowerCase().includes(str), ); } /** * Cleans the location name by removing * the text in parentheses and the parentheses themselves. * @param {String} location name of the location. * @return {String} cleaned location name. */ const cleanLocation = (location) => location.replace(/\(.*?\)/g, "").replace("_", " "); /** * Updates the label (latitude, longitude and secant) * of the given {@link gpsCoordinates location}. * @param {String} location name of the location. */ function labelForLocation(location) { const lat = gpsCoordinates[location].latitude; const lon = gpsCoordinates[location].longitude; const sec = 1 / Math.cos(toRadian(lat)); const distance = haversine( gpsCoordinates[location], gpsCoordinates["Rio"], ).km; document.querySelector('label[for="equator"]').innerHTML = `${cleanLocation(location)} (lat: ${lat.toFixed(5)}°, lon: ${lon.toFixed(5)}°), sec(lat): ${sec.toFixed(2)}, ${Math.round(distance)} km from Rio`; } /** *Convert from {@link GCS} * (longitude, latitude) to screen coordinates.
* This function uses the {@link project WebGL projection} * to convert the geographic coordinates to screen coordinates (pixels). */
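// A hypothetical sketch of that conversion (the name and details are
// assumptions, not the original implementation; project() is assumed to
// mirror unproject()'s signature): map the GCS pair to a point on the unit
// sphere and project it to the screen.
function gcs2ScreenCoordinates(gcs) {
  const lambda = toRadian(gcs.longitude);
  const phi = toRadian(gcs.latitude);
  // point on the unit sphere (y up, prime meridian towards +z: illustrative)
  const p = vec3.fromValues(
    Math.cos(phi) * Math.sin(lambda),
    Math.sin(phi),
    Math.cos(phi) * Math.cos(lambda),
  );
  const viewport = gl.getParameter(gl.VIEWPORT);
  return project([], p, getModelMatrix(), viewMatrix, projection, viewport);
}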
/** *Closure for keydown events. * Chooses a {@link theModel model} and which {@link axis} to rotate around. */ /** *Handler for keydown events.
* @param {KeyboardEvent} event keyboard event. * @callback key_event callback to handle a key pressed. */ return (event) => { const ch = getChar(event); switch (ch) { case "m": case "M": inc = ch == "m" ? 1 : -1; numSubdivisions = mod(numSubdivisions + inc, maxSubdivisions + 1); gscale = mscale = 1; if (numSubdivisions == 0) { element.models.value = (subPoly + 9).toString(); } else { element.models.value = "13"; } theModel = createModel({ poly: subPoly }); tri = theModel.ntri(numSubdivisions); kbd.innerHTML = ` (${theModel.name} level ${theModel.level(tri)} → ${tri} triangles):`; break; case " ": selector.paused = !selector.paused; pause.checked = selector.paused; if (axis === " ") axis = "y"; if (!selector.paused) document.getElementById(axis).checked = true; animate(); return; case "l": selector.lines = !selector.lines; if (!selector.lines) selector.texture = true; element.mesh.checked = selector.lines; element.texture.checked = selector.texture; break; case "L": selector.locations = !selector.locations; element.locations.checked = selector.locations; break; case "k": selector.texture = !selector.texture; if (!selector.texture) selector.lines = true; element.texture.checked = selector.texture; element.mesh.checked = selector.lines; break; case "A": if (selector.paused) { if (animationID) { clearInterval(animationID); animationID = null; } else { animationID = startAnimation(); } } break; case "a": selector.axes = !selector.axes; element.axes.checked = selector.axes; break; case "x": case "y": case "z": case "q": axis = ch; canvas.style.cursor = "crosshair"; if (axis == "q") { canvas.style.cursor = "wait"; if (isTouchDevice()) { updateCurrentMeridian(...phongHighlight); } else { updateCurrentMeridian(cursorPosition.x, cursorPosition.y); } } selector.paused = false; document.getElementById(axis).checked = true; animate(); break; case "I": selector.intrinsic = true; document.getElementById("intrinsic").checked = true; animate(); break; case "e": selector.intrinsic = false; document.getElementById("extrinsic").checked = true; animate(); break; case "E": selector.equator = !selector.equator; element.equator.checked = selector.equator; animate(); break; case "Z": gscale = mscale = 1; element.models.value = "5"; n = numSubdivisions; numSubdivisions = 1; theModel = createModel({ shape: uvSphereND(1, 48, 24), name: "spherend", }); numSubdivisions = n; break; case "s": gscale = mscale = 1; element.models.value = "5"; theModel = createModel({ shape: selector.hws ? uvSphere(1, 48, 24) : getModelData(new THREE.SphereGeometry(1, 48, 24)), name: "sphere", }); break; case "S": // subdivision sphere gscale = mscale = 1; element.models.value = "13"; numSubdivisions = maxSubdivisions; theModel = createModel({ poly: subPoly }); tri = theModel.ntri(numSubdivisions); kbd.innerHTML = ` (${theModel.name} level ${theModel.level(tri)} → ${tri} triangles):`; break; case "T": // (2,3)-torus knot (trefoil knot). // The genus of a torus knot is (p−1)(q−1)/2. gscale = mscale = 0.6; element.models.value = "8"; theModel = createModel({ shape: getModelData(new THREE.TorusKnotGeometry(1, 0.4, 128, 16)), name: "torusknot", chi: 1, }); break; case "t": gscale = mscale = 1; element.models.value = "7"; theModel = createModel({ shape: selector.hws ? 
uvTorus(1, 0.5, 30, 30) : getModelData(new THREE.TorusGeometry(0.75, 0.25, 30, 30)), name: "torus", chi: 0, }); break; case "u": // capsule from threejs gscale = mscale = 1.2; element.models.value = "0"; theModel = createModel({ shape: getModelData(new THREE.CapsuleGeometry(0.5, 0.5, 10, 20)), name: "capsule", }); break; case "c": gscale = mscale = 1; element.models.value = "3"; const r = mercator ? 3 / 8 : 9 / 16; const length = 2 * Math.PI * r; let height = mercator ? length : length / 2; if (noTexture) height -= r; theModel = createModel({ shape: selector.hws ? uvCylinder(r, height, 30, 5, false, false) : getModelData( new THREE.CylinderGeometry( r, r, height, 30, 5, false, -Math.PI / 2, ), ), name: "cylinder", }); break; case "C": gscale = mscale = 0.8; element.models.value = "1"; theModel = createModel({ shape: selector.hws ? uvCone(1, 2, 30, 5, false) : getModelData( new THREE.ConeGeometry(1, 2, 30, 5, false, -Math.PI / 2), ), name: "cone", }); break; case "v": gscale = mscale = 0.6; element.models.value = "2"; theModel = createModel({ shape: selector.hws ? cube(2) : getModelData(new THREE.BoxGeometry(2, 2, 2)), name: "cube", }); break; case "p": // teapot - this is NOT a manifold model - it is a model with borders! gscale = mscale = selector.hws ? 0.09 : 0.7; element.models.value = "6"; theModel = createModel({ shape: selector.hws ? teapotModel : getModelData( new TeapotGeometry(1, 10, true, true, true, true, true), ), name: "teapot", chi: null, }); break; case "d": case "i": case "o": case "w": gscale = mscale = 1; subPoly = poly[ch]; numSubdivisions = 0; element.models.value = (subPoly + 9).toString(); theModel = createModel({ poly: subPoly }); kbd.innerHTML = ":"; break; case "r": gscale = mscale = 1.0; element.models.value = "4"; const segments = 30; theModel = createModel({ shape: selector.hws ? ring(0.3, 1.0, segments) : getModelData( new THREE.RingGeometry(0.3, 1.0, segments, 1, 0, 2 * Math.PI), ), name: "ring", chi: segments, }); break; case "n": case "N": const incr = ch == "n" ? 1 : -1; textureCnt = mod(textureCnt + incr, imageFilename.length); selectTexture(false); return; case "f": fixuv = !fixuv; // reload texture with or without fixing image.src = `./textures/${imageFilename[textureCnt]}`; element.fix_uv.checked = fixuv; setUVfix(); break; case "K": mercator = !mercator; element.merc.checked = mercator; break; case "b": culling = !culling; if (culling) gl.enable(gl.CULL_FACE); else gl.disable(gl.CULL_FACE); element.cull.checked = culling; break; case "ArrowUp": mscale *= zoomfactor; mscale = Math.max(gscale * 0.1, mscale); break; case "ArrowDown": mscale /= zoomfactor; mscale = Math.min(gscale * 3, mscale); break; case "Meta": case "Alt": selector.hws = !selector.hws; element.hws.checked = selector.hws; break; case "W": selector.cities = !selector.cities; element.byDate.checked = selector.cities; cities.current = element.byDate.checked ? 
cities.byDate : cities.byLongitude; break; case "X": const date = +element.timeline.value; let dt; for (dt of cities.timeline) { if (dt >= date) break; } const index = cities.timeline.indexOf(dt); currentLocation = cities.current[index]; labelForTimeline(dt); updateLocation(0); break; case "R": currentLocation = "Rio"; case "O": mat4.identity(modelMatrix); rotator.setViewMatrix(modelMatrix); mat4.identity(modelM); // model matrix vec3.set(forwardVector, 0, 0, 1); // phong highlight mscale = gscale; updateLocation(0); break; case "J": currentLocation = closestSite(gpsCoordinates["Unknown"]); case "j": updateLocation(0); break; case "g": case "ArrowRight": updateLocation(1); break; case "G": case "ArrowLeft": updateLocation(-1); break; case "D": canvas.toBlob((blob) => { saveWebGLCanvasAsPNG( blob, `WebGL_Globe-${canvas.width}x${canvas.height}.png`, ); }); break; case "B": case "U": const sign = ch == "U" ? -1 : 1; const rotF = rotateGlobeAroundAxis( [], (sign * Math.PI) / 6, forwardVector, ); mat4.multiply(modelM, modelM, rotF); updateLocation(0, false); break; case "Q": const rotY = setYUp([], modelM, forwardVector); mat4.multiply(modelM, modelM, rotY); updateLocation(0); break; case "h": selector.tooltip = !selector.tooltip; element.tip.checked = selector.tooltip; if (!selector.tooltip) { element.tooltip.style.display = "none"; canvastip.style.display = "none"; } else { element.tooltip.style.display = "block"; canvastip.style.display = "block"; } break; default: return; } opt.innerHTML = `${gl.getParameter( gl.SHADING_LANGUAGE_VERSION, )}Scalar triple product of three vectors.
* * The absolute value of the scalar triple product represents * the volume of the parallelepiped formed by the three vectors * a, b, and c when originating from the same point. * *The sign of the result indicates the orientation of the vectors * (whether they form a right-handed or left-handed system). * If the scalar triple product is zero, * it means the three vectors are coplanar (lie in the same plane).
* @param {vec3} a first vector. * @param {vec3} b second vector. * @param {vec3} c third vector. * @returns {Number} a⋅(b×c) * @see Scalar triple product */ function scalarTripleProduct(a, b, c) { return vec3.dot(a, vec3.cross([], b, c)); } /** * Clamp a value between a minimum and maximum value. * @param {Number} value value to be clamped. * @param {Number} min minimum value. * @param {Number} max maximum value. * @returns {Number} clamped value. */ function clamp(value, min, max) { return Math.min(Math.max(value, min), max); } /** *Calculate the angle in radians between two vectors.
** const cosAngle = clamp(vec3.dot(v1, v2) / (vec3.length(v1) * vec3.length(v2)), -1, 1); * const angleInRadians = Math.acos(cosAngle); ** @param {vec3} v1 first vector. * @param {vec3} v2 second vector. * @returns {Number} angle in radians between v1 and v2. * @see {@link https://www.quora.com/How-do-I-calculate-the-angle-between-two-vectors-in-3D-space-using-atan2 How do I calculate the angle between two vectors in 3D space using atan2?} */ function getAngleBetweenVectors(v1, v2) { // Calculate cross product const c = vec3.create(); vec3.cross(c, v1, v2); // Calculate the dot product const dotProduct = vec3.dot(v1, v2); const angleInRadians = Math.atan2(vec3.length(c), dotProduct); return angleInRadians; } /** * Rotate the model around a given axis so that its 'north' vector aligns * with the screen y axis after applying the given rotation matrix. * @param {mat4} out the receiving matrix. * @param {mat4} rotationMatrix transformation matrix applied to model. * @param {vec3} rotationAxis rotation axis. * @returns {mat4} out. * @see {@link https://stackoverflow.com/questions/15022630/how-to-calculate-the-angle-from-rotation-matrix How to calculate the angle from rotation matrix} */ function setYUp(out, rotationMatrix, rotationAxis) { const north = vec3.fromValues(0, 1, 0); const up = vec3.transformMat4([], north, mat4.invert([], rotationMatrix)); const d = decomposeVector(north, rotationAxis).perp; // Angle onto the plane perpendicular to rotationAxis let angle = getAngleBetweenVectors(d, up); const tripleProd = scalarTripleProduct(rotationAxis, d, up); if (tripleProd < 0) { angle = -angle; } rotateGlobeAroundAxis(out, angle, rotationAxis); return out; } /** * Rotate the globe around a given axis by a given angle. * @param {mat4} out the receiving matrix. * @param {Number} angle angle in radians. * @param {vec3} axis rotation axis. * @returns {mat4} out. */ function rotateGlobeAroundAxis(out, angle, axis) { if (Math.abs(angle) < 1e-5) { // No significant rotation needed return mat4.identity(out); } else { mat4.fromRotation(out, angle, axis); } return out; } /** * Rotate the model towards a given vector. * @param {mat4} out the receiving matrix. * @param {vec3} modelPosition a model's vector in world coordinates. * @param {vec3} modelForward model's forward vector in world coordinates. * @returns {mat4} out. */ function rotateModelTowardsCamera( out, modelPosition, modelForward = vec3.fromValues(0, 0, 1), ) { // Calculate rotation axis (cross product) const rotationAxis = vec3.create(); vec3.cross(rotationAxis, modelPosition, modelForward); const angle = getAngleBetweenVectors(modelPosition, modelForward); rotateGlobeAroundAxis(out, angle, rotationAxis); // Return the rotation matrix return out; } /** * Draw the meridian and parallel lines at the {@link currentLocation} * on the texture image. 
*/ function drawLinesOnImage() { const canvasimg = document.getElementById("canvasimg"); const ctx = canvasimg.getContext("2d"); ctx.clearRect(0, 0, canvasimg.width, canvasimg.height); if (selector.equator) { const uv = gcs2UV(gpsCoordinates[currentLocation]); uv.t = 1 - uv.t; if (mercator) { // mercator projection uv.t = spherical2Mercator(uv.s, uv.t).y; } // screen coordinates const x = uv.s * canvasimg.width; const y = uv.t * canvasimg.height; ctx.beginPath(); ctx.moveTo(x, 0); // meridian ctx.lineTo(x, canvasimg.height); ctx.moveTo(0, y); // parallel ctx.lineTo(canvasimg.width, y); ctx.strokeStyle = "red"; ctx.stroke(); ctx.closePath(); } } /** * Draw the {@link gpsCoordinates} locations * on the texture image. */ function drawLocationsOnImage() { const canvasimg = document.getElementById("canvasimg"); const ctx = canvasimg.getContext("2d"); // ctx.clearRect(0, 0, canvasimg.width, canvasimg.height); for (const location of cities.current) { const gps = gpsCoordinates[location]; const uv = gcs2UV(gps); uv.t = 1 - uv.t; if (mercator) { // mercator projection uv.t = spherical2Mercator(uv.s, uv.t).y; } // screen coordinates const x = uv.s * canvasimg.width; const y = uv.t * canvasimg.height; ctx.beginPath(); ctx.arc(x, y, 2, 0, Math.PI * 2); ctx.fillStyle = location === "Unknown" ? "blue" : (ctx.fillStyle = gps.remarkable.at(-1).includes(" BC") ? "yellow" : "red"); ctx.fill(); ctx.closePath(); } } /** *
Closure for selecting a texture from the menu.
* Tetrahedra and octahedra may need to be reloaded to get * appropriate texture coordinates. */ /** *Transforms object space coordinates into screen coordinates.
*/ /** *Transforms screen coordinates into object space coordinates. */
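// A minimal sketch of such an unprojection (illustrative, not necessarily the
// original implementation); note that gl-matrix's vec3.transformMat4 performs
// the perspective divide by w.
function unproject(out, win, model, view, projection, viewport) {
  const [vx, vy, vw, vh] = viewport;
  // window coordinates → normalized device coordinates in [-1, 1]
  const ndc = vec3.fromValues(
    ((win[0] - vx) / vw) * 2 - 1,
    ((win[1] - vy) / vh) * 2 - 1,
    win[2] * 2 - 1,
  );
  // apply the inverse of projection * view * model
  const pvm = mat4.multiply([], projection, mat4.multiply([], view, model));
  return vec3.transformMat4(out, ndc, mat4.invert([], pvm));
}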
/** *Find point of intersection between a line and a sphere.
* The line is defined by its origin and an end point. * The sphere is defined by its center and radius. * @param {vec3} o ray origin. * @param {vec3} p ray end point. * @param {vec3} c center of the sphere. * @param {Number} r radius of the sphere. * @returns {vec3|null} intersection point or null, if there is no intersection. * @see {@link https://en.wikipedia.org/wiki/Line–sphere_intersection Line–sphere intersection} */ function lineSphereIntersection(o, p, c, r) { // line direction const u = vec3.normalize([], vec3.subtract([], p, o)); // ||p - o|| const oc = vec3.subtract([], o, c); // o - c const a = vec3.dot(u, oc); const b = vec3.dot(oc, oc); // ||oc||^2 const delta = a * a - b + r * r; let dist; if (delta > 0) { const sqrt_delta = Math.sqrt(delta); const d1 = -a + sqrt_delta; const d2 = -a - sqrt_delta; dist = Math.min(d1, d2); } else if (delta == 0) { dist = -a; } else { // no intersection return null; } return vec3.scaleAndAdd([], o, u, dist); // o + u * dist } /** * Select next texture and creates an {@link createEvent event} "n" for it. */ function nextTexture() { handleKeyPress(createEvent("n")); } /** * Select previous texture and creates an {@link createEvent event} "N" for it. */ function previousTexture() { handleKeyPress(createEvent("N")); } /** * Select next subdivision level and creates an {@link createEvent event} "m" for it. */ function nextLevel() { handleKeyPress(createEvent("m")); } /** * Select previous subdivision level and creates an {@link createEvent event} "M" for it. */ function previousLevel() { handleKeyPress(createEvent("M")); } /** * Increase zoom level and creates an {@link createEvent event} ↓ for it. */ function zoomIn() { handleKeyPress(createEvent("ArrowDown")); } /** * Decrease zoom level and creates an {@link createEvent event} ↑ for it. */ function zoomOut() { handleKeyPress(createEvent("ArrowUp")); } /** * Creates a ray through the pixel at (x, y) * on the canvas, unprojects it, and returns its intersection * against the sphere of radius 1 centered at the origin (0, 0, 0). * @param {Number} x pixel x coordinate. * @param {Number} y pixel y coordinate. * @returns {vec3|null} intersection point in world coordinates or null if no intersection. */ function pixelRayIntersection(x, y) { const viewport = gl.getParameter(gl.VIEWPORT); y = viewport[3] - y; // ray origin in world coordinates const o = unproject( [], vec3.fromValues(x, y, 0), getModelMatrix(), viewMatrix, projection, viewport, ); // ray end point in world coordinates const p = unproject( [], vec3.fromValues(x, y, 1), getModelMatrix(), viewMatrix, projection, viewport, ); return lineSphereIntersection(o, p, [0, 0, 0], 1); } /** *Checks if the device is a touch device.
* It checks for the presence of touch events in the window object * and the maximum number of touch points supported by the device. * This is useful for determining if the application should use touch-specific * features or fall back to mouse events. * @returns {Boolean} true if the device is a touch device, false otherwise. * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Touch_events/ Touch events} */ const isTouchDevice = () => { return ( "ontouchstart" in window || navigator.maxTouchPoints > 0 || navigator.msMaxTouchPoints > 0 ); }; /** * Format a number including a plus sign for positive numbers. * @param {Number} num number. * @param {Number} decimals number of digits to appear after the decimal point. * @return {String} a string representing the given number using fixed-point notation. */ function formatNumberWithSign(num, decimals) { let fixedString = num.toFixed(decimals); const eps = 10 ** -decimals; // smallest value distinguishable from zero if (num > eps) { return `+${fixedString.padStart(decimals + 4, "0")}`; } else if (Math.abs(num) < eps) { fixedString = `0.${"0".repeat(decimals)}`; } else { fixedString = `–${fixedString.substring(1).padStart(decimals + 4, "0")}`; } return fixedString; } /** *Updates the {@link currentMeridian current meridian} based on the given pixel position.
* It calculates the {@link pixelRayIntersection intersection} of the pixel ray with the sphere * and converts the intersection point to spherical coordinates. * If the intersection exists, it updates the {@link currentMeridian} variable * and displays the coordinates in the {@link canvastip} element. *Note that there is no cursor position on {@link isTouchDevice touch devices}.
* @param {Number} x pixel x coordinate. * @param {Number} y pixel y coordinate. * @param {Boolean} setCurrentMeridian if true, updates the currentMeridian variable. * @see {@link pixelRayIntersection pixelRayIntersection()} */ function updateCurrentMeridian(x, y, setCurrentMeridian = true) { const intersection = pixelRayIntersection(x, y); if (intersection) { const uv = cartesian2Spherical(intersection); const gcs = spherical2gcs(uv); if (setCurrentMeridian) { currentMeridian.longitude = gcs.longitude; currentMeridian.latitude = gcs.latitude; } if (selector.tooltip) { canvastip.innerHTML = `(${formatNumberWithSign(gcs.longitude, 2)}, ${formatNumberWithSign(gcs.latitude, 2)})`; canvastip.style.display = "block"; } } else { // cursor outside the globe updateCurrentMeridian(...phongHighlight, setCurrentMeridian); } } /** *Saves the current WebGL canvas content as a PNG image.
* Ensure preserveDrawingBuffer is true if needed for capturing post-render content. */ /** *Return the last date mentioned for a {@link gpsCoordinates location} historical figure.
* In case it is a range of dates (first-second), it returns the first date. * *
I would like to return Date.parse(date). * However, it does not work with BC dates (negative years).
* * @param {String} v location name. * @return {Array<Number>} [year, month] of the date (year is negative for BC dates). * @global * @function * @see {@link https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/parse Date.parse()} */ const getDate = (v) => { if (v == "Unknown") return [Number.MAX_VALUE, 0]; // must be the last if (v == "Null_Island") return [Number.MIN_SAFE_INTEGER, 0]; // must be the first // "American Civil War, 12 April 1861-26 May 1865" const remDate = gpsCoordinates[v].remarkable.at(-1).split(","); // ["American Civil War", "12 April 1861-26 May 1865"] if (remDate.length > 1) { let date = remDate.at(-1).split("-"); // ["12 April 1861", "26 May 1865"] date = date[0].trim(); // "12 April 1861" // "4 March 1933 BC" let bc = false; if (remDate.some((s) => s.includes(" BC"))) { // if the date is BC, it must be negative date = date.replace("BC", "").trim().padStart(4, "0"); bc = true; } // date before year 100 is set as 19xx - I gave up... const y = date.substring(date.lastIndexOf(" ")); const d = new Date(date); const year = y.length < 4 ? +y : d.getUTCFullYear(); return [bc ? -year : year, d.getUTCMonth()]; } return [Number.MIN_SAFE_INTEGER, 0]; }; // temporary array holds objects with position and sort-value const mapped = data.map((v, i) => { const d = getDate(v); return { i, year: d[0], month: d[1] }; }); // sorting the mapped array containing the reduced values mapped.sort((a, b) => { if (a.year > b.year) { return 1; } if (a.year < b.year) { return -1; } return a.month - b.month; }); // mapped is now sorted by year and month const timeline = mapped.map((v) => v.year); // and the location names are in the same order // as the sorted mapped array // so we can map the original data to the sorted order // and return the sorted location names const location = mapped.map((v) => data[v.i]); return [location, timeline]; } /** *
*Also appends event listeners to the rot and mode input radio buttons.
* @see {@link https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/addEventListener EventTarget: addEventListener()} * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/change_event HTMLElement: change event} */ function addListeners() { /** * @summary Executed when the mesh checkbox is checked or unchecked. *Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is click.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is click.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is click.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Fired when the value of an <input type="range"> in the * {@link https://html.spec.whatwg.org/multipage/input.html#range-state-(type=range) Range state} * changes (by clicking or using the keyboard).
* * The {@link handleKeyPress callback} argument sets the callback that will be invoked when * the event is dispatched. * * Executed when the slider is changed. * * @summary Appends an event listener for events whose type attribute value is change. * * @param {Event} event a generic event. * @event timeline * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/change_event HTMLElement: change event} */ element.timeline.addEventListener("change", (event) => handleKeyPress(createEvent("X")), ); /** * @summary Executed when the equator checkbox is checked or unchecked. *Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link selectTexture} argument sets the callback that will be invoked when
* the event is dispatched.
Appends an event listener for events whose type attribute value is change.
* The {@link selectModel} argument sets the callback that will be invoked when
* the event is dispatched.
Gets the latitude and longitude on the texture image when it is clicked * and draws that position on the map.
* The pointerdown event is fired when a pointer becomes active. * For mouse, it is fired when the device transitions from no buttons pressed to at least one button pressed. * For touch, it is fired when physical contact is made with the digitizer. * For pen, it is fired when the stylus makes physical contact with the digitizer. * @event pointerdown-textimg * @param {PointerEvent} event a pointer event. * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent/offsetX MouseEvent: offsetX property} * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Element/pointerdown_event Element: pointerdown event} * @see {@link https://caniuse.com/pointer Pointer events} */ element.textimg.addEventListener("pointerdown", (event) => { const x = event.offsetX; let y = event.offsetY; y = event.target.height - y; const uv = { s: x / event.target.width, t: y / event.target.height, }; if (mercator) { // mercator projection uv.t = mercator2Spherical(uv.s, uv.t).t; } const unknown = gpsCoordinates["Unknown"]; ({ latitude: unknown.latitude, longitude: unknown.longitude } = spherical2gcs(uv)); currentLocation = cities.current.at(-2); handleKeyPress(createEvent("g")); }); /** *Displays the u and v normalized coordinates on the texture image * when pointer is moved upon.
*The pointermove event is fired when a pointer changes coordinates, * and the pointer has not been canceled by a browser touch-action. * It's very similar to the mousemove event, but with more features.
* * These events happen whether or not any pointer buttons are pressed. * They can fire at a very high rate, depending on how fast the user moves the pointer, * how fast the machine is, what other tasks and processes are happening, etc. * @event pointermove-textimg * @param {PointerEvent} event a pointer event. * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent/offsetX MouseEvent: offsetX property} * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Element/pointermove_event Element: pointermove event} * @see {@link https://caniuse.com/pointer Pointer events} */ element.textimg.addEventListener("pointermove", (event) => { // tooltip on mouse hover if (!selector.tooltip) { element.tooltip.innerHTML = ""; element.tooltip.style.display = "none"; return; } const x = event.offsetX; let y = event.offsetY; y = event.target.height - y; const uv = { s: x / event.target.width, t: y / event.target.height, }; if (mercator) { // mercator projection uv.t = mercator2Spherical(uv.s, uv.t).t; } element.tooltip.style.top = `${event.offsetY + 15}px`; element.tooltip.style.left = `${x}px`; // UV normalized element.tooltip.innerHTML = `(${uv.s.toFixed(3)}, ${uv.t.toFixed(3)})`; element.tooltip.style.display = "block"; }); /** *Remove the tooltip when the pointer is outside the textimg element.
* * The pointerout event is fired for several reasons. */ /** *Variables moving and {@link clicked} are used to distinguish between a simple click * and a click followed by a drag while using the {@link rotator}.
*When the pointer is down, moving is set to false and clicked is set to true. * When the pointer moves, moving is set to true if clicked is also true. * When the pointer is up, if moving is true, both moving and clicked are set to false.
* @type {Boolean} */ let moving = false; /** * We need to know if the pointer is being held down while {@link moving} the globe or not. * Otherwise, we would not be able to distinguish between a click and a drag, * while using the {@link rotator simpleRotator}. * @type {Boolean} */ let clicked = false; /** *Sets {@link moving} to false and {@link clicked} to true.
* The pointerdown event is fired when a pointer becomes active. * For mouse, it is fired when the device transitions from no buttons pressed to at least one button pressed. * For touch, it is fired when physical contact is made with the digitizer. * For pen, it is fired when the stylus makes physical contact with the digitizer. *This behavior is different from mousedown events. * When using a physical mouse, mousedown events fire whenever any button on a mouse is pressed down. * pointerdown events fire only upon the first button press; * subsequent button presses don't fire pointerdown events.
* @event pointerdown-theCanvas * @param {PointerEvent} event a pointer event. * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Element/pointerdown_event Element: pointerdown event} * @see {@link https://caniuse.com/pointer Pointer events} */ canvas.addEventListener("pointerdown", (event) => { clicked = true; moving = false; }); /** *Displays the GCS coordinates (longitude and latitude) * on the globe when the pointer is moved over it.
*Sets {@link moving} to true if {@link clicked} is also true.
*The pointermove event is fired when a pointer changes coordinates, * and the pointer has not been canceled by a browser touch-action. * It's very similar to the mousemove event, but with more features.
* * These events happen whether or not any pointer buttons are pressed. * They can fire at a very high rate, depending on how fast the user moves the pointer, * how fast the machine is, what other tasks and processes are happening, etc. * @event pointermove-theCanvas * @param {PointerEvent} event a pointer event. * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Element/pointermove_event Element: pointermove event} * @see {@link https://caniuse.com/pointer Pointer events} */ canvas.addEventListener("pointermove", (event) => { if (clicked) { moving = true; clicked = false; // we are moving the globe canvas.style.cursor = "pointer"; if (axis === "q") axis = " "; return; } if (canvas.style.cursor !== "pointer") canvas.style.cursor = "crosshair"; // tooltip on mouse hover if (moving || !selector.tooltip) { canvastip.innerHTML = ""; canvastip.style.display = "none"; } else { const x = event.offsetX; const y = event.offsetY; cursorPosition = { x, y }; const intersection = pixelRayIntersection(x, y); if (!intersection) { return; } const uv = cartesian2Spherical(intersection); const gcs = spherical2gcs(uv); canvastip.style.top = `${y + 15}px`; canvastip.style.left = `${x}px`; // GCS coordinates canvastip.innerHTML = `(${gcs.longitude.toFixed(3)}, ${gcs.latitude.toFixed(3)})`; canvastip.style.display = "block"; } }); /** *Sets {@link clicked} to false and, if {@link moving} is true, sets it to false and returns, * because we are moving the globe. * Otherwise, gets the latitude and longitude on the globe * and draws its position on the map.
* The pointerup event is fired when a pointer is no longer active. * This behavior is different from mouseup events. * When using a physical mouse, mouseup events fire whenever any button on a mouse is released. * pointerup events fire only upon the last button release; previous button releases, * while other buttons are held down, don't fire pointerup events. * @event pointerup-theCanvas * @param {PointerEvent} event a pointer event. * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Element/pointerup_event Element: pointerup event} * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent/offsetX MouseEvent: offsetX property} * @see {@link https://caniuse.com/pointer Pointer events} */ canvas.addEventListener("pointerup", (event) => { //if (event.buttons != 2) return; canvas.style.cursor = "crosshair"; clicked = false; if (moving) { moving = false; return; // ignore if moving } // get the intersection point on the sphere const intersection = pixelRayIntersection(event.offsetX, event.offsetY); // increment or decrement based on the side of the canvas // where the pointer was clicked. let ch = event.offsetX > canvas.width / 2 ? "g" : "G"; if (intersection) { const uv = cartesian2Spherical(intersection); const unknown = gpsCoordinates["Unknown"]; ({ latitude: unknown.latitude, longitude: unknown.longitude } = spherical2gcs(uv)); const position = ch === "g" ? -2 : 0; currentLocation = cities.current.at(position); } else { // clicked outside the globe on the canvas // if clicked on the upper half, rotate around forward vector if (event.offsetY < canvas.height / 2) ch = event.offsetX > canvas.width / 2 ? "B" : "U"; } handleKeyPress(createEvent(ch)); }); /** * No context menu when pressing the right mouse button. * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Element/contextmenu_event Element: contextmenu event} * @event contextmenu * @param {MouseEvent} event mouse event. * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent MouseEvent} */ window.addEventListener("contextmenu", (event) => { event.preventDefault(); }); /** * Double click as right click. * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Element/dblclick_event Element: dblclick event} * @event dblclick * @param {MouseEvent} event mouse event. * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent MouseEvent} */ canvas.addEventListener("dblclick", (event) => { const dblclickEvent = new PointerEvent("pointerdown", { pointerType: "mouse", pointerId: 1, clientX: event.clientX, clientY: event.clientY, bubbles: true, cancelable: true, buttons: 2, // right button }); event.preventDefault(); canvas.dispatchEvent(dblclickEvent); }); } /** * Sets up an interval that periodically simulates a key press. * Every delay milliseconds, it calls {@link handleKeyPress} with a simulated event * that has the key 'g' pressed, which triggers the next location in the timeline. * This is particularly useful for testing or for automatically cycling through locations. * @param {Number} [delay=4000] - The interval time in milliseconds. * @return {Number} The ID of the interval, which can be passed to `clearInterval()` to stop the animation.
 * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Window/setInterval Window: setInterval() method}
 * @see {@link createEvent}
 */
function startAnimation(delay = 4000) {
  return window.setInterval(() => {
    // simulate pressing the "g" key at each tick
    handleKeyPress(createEvent("g"));
  }, delay);
}

// export for using in the html file.
window.zoomIn = zoomIn;
window.zoomOut = zoomOut;
window.nextTexture = nextTexture;
window.previousTexture = previousTexture;
window.nextLevel = nextLevel;
window.previousLevel = previousLevel;

/**
 * Code to actually render our geometry.
 * Draws axes, applies texture, then draws lines.
 */
function draw() {
  // clear the framebuffer
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  if (selector.axes) drawAxes();
  if (selector.texture) drawTexture();
  if (selector.lines) drawLines();
  if (selector.equator) drawParallel();
  if (selector.locations && isMap) drawLocations();
  drawLinesOnImage();
  if (isMap) drawLocationsOnImage();
}

/**
 * Returns a new scaled model matrix, which applies {@link mscale}.
 * @returns {mat4} model matrix.
 */
function getModelMatrix() {
  return mscale != 1
    ? mat4.multiply(
        [],
        modelMatrix,
        mat4.fromScaling([], vec3.fromValues(mscale, mscale, mscale)),
      )
    : modelMatrix;
}

/**
 * Renders the current model with a texture applied.
 * Uses the {@link lightingShader}.
 *
 * If the attribute "a_TexCoord" is not defined in the vertex shader,
 * texture coordinates will be calculated pixel by pixel
 * in the fragment shader.
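 *
 * A minimal sketch of that per-pixel mapping, written here in JavaScript for
 * clarity (the shader does the equivalent; axis and seam conventions may differ):
 * <pre>
 * // unit vector on the sphere → equirectangular (u,v) in [0,1]²
 * function sphereUV([x, y, z]) {
 *   const u = 0.5 + Math.atan2(z, x) / (2 * Math.PI);
 *   const v = 0.5 - Math.asin(y) / Math.PI;
 *   return [u, v];
 * }
 * </pre>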
 *
 * We can also set a uniform attribute (u_mercator) in the shader
 * to use a {@link https://hrcak.srce.hr/file/239690 Mercator projection}
 * instead of an {@link https://en.wikipedia.org/wiki/Equirectangular_projection equirectangular projection}.
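 *
 * For reference, a hypothetical JavaScript version of the Mercator branch
 * (assuming latitude φ is clamped to ±85.051129°, so that v stays in [0,1]):
 * <pre>
 * function mercatorV(latitude) {
 *   const phi = (latitude * Math.PI) / 180;
 *   const y = Math.log(Math.tan(Math.PI / 4 + phi / 2)); // y ∈ [-π, π]
 *   return 0.5 + y / (2 * Math.PI); // normalized to [0,1]
 * }
 * </pre>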
 */
function drawTexture() {
  // bind the shader
  gl.useProgram(lightingShader);

  // get the index for the a_Position attribute defined in the vertex shader
  const positionIndex = gl.getAttribLocation(lightingShader, "a_Position");
  if (positionIndex < 0) {
    console.log("Failed to get the storage location of a_Position");
    return;
  }

  const normalIndex = gl.getAttribLocation(lightingShader, "a_Normal");
  if (normalIndex < 0) {
    console.log("Failed to get the storage location of a_Normal");
    return;
  }

  const texCoordIndex = gl.getAttribLocation(lightingShader, "a_TexCoord");
  noTexture = texCoordIndex < 0;

  const u_mercator = gl.getUniformLocation(lightingShader, "u_mercator");
  gl.uniform1i(u_mercator, mercator);

  // "enable" the a_position attribute
  gl.enableVertexAttribArray(positionIndex);
  gl.enableVertexAttribArray(normalIndex);
  // texture coordinates can be calculated in the fragment shader
  if (!noTexture) gl.enableVertexAttribArray(texCoordIndex);

  // bind buffers for points
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  gl.vertexAttribPointer(positionIndex, 3, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexNormalBuffer);
  gl.vertexAttribPointer(normalIndex, 3, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
  if (!noTexture)
    gl.vertexAttribPointer(texCoordIndex, 2, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);

  // set uniform in shader for projection * view * model transformation
  let loc = gl.getUniformLocation(lightingShader, "model");
  gl.uniformMatrix4fv(loc, false, getModelMatrix());
  loc = gl.getUniformLocation(lightingShader, "view");
  gl.uniformMatrix4fv(loc, false, viewMatrix);
  loc = gl.getUniformLocation(lightingShader, "projection");
  gl.uniformMatrix4fv(loc, false, projection);
  loc = gl.getUniformLocation(lightingShader, "normalMatrix");
  gl.uniformMatrix3fv(
    loc,
    false,
    makeNormalMatrixElements(modelMatrix, viewMatrix),
  );
  loc = gl.getUniformLocation(lightingShader, "lightPosition");
  gl.uniform4f(loc, ...lightPosition);

  // light and material properties
  loc = gl.getUniformLocation(lightingShader, "lightProperties");
  gl.uniformMatrix3fv(loc, false, lightPropElements.white_light);
  loc = gl.getUniformLocation(lightingShader, "materialProperties");
  gl.uniformMatrix3fv(loc, false, matPropElements.shiny_brass);
  loc = gl.getUniformLocation(lightingShader, "shininess");
  gl.uniform1f(loc, shininess.at(-1));

  // need to choose a texture unit, then bind the texture to TEXTURE_2D for that unit
  const textureUnit = 1;
  gl.activeTexture(gl.TEXTURE0 + textureUnit);
  gl.bindTexture(gl.TEXTURE_2D, textureHandle);
  loc = gl.getUniformLocation(lightingShader, "sampler");
  gl.uniform1i(loc, textureUnit);

  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
  if (theModel.indices) {
    gl.drawElements(
      gl.TRIANGLES,
      theModel.indices.length,
      theModel.indices.constructor === Uint32Array
        ? gl.UNSIGNED_INT
        : gl.UNSIGNED_SHORT,
      0,
    );
  } else {
    gl.drawArrays(gl.TRIANGLES, 0, theModel.vertexPositions.length / 3);
  }

  gl.disableVertexAttribArray(positionIndex);
  gl.disableVertexAttribArray(normalIndex);
  if (!noTexture) gl.disableVertexAttribArray(texCoordIndex);
  gl.useProgram(null);
}

/**
 * Draws the lines: mesh + normals.
 * Uses the {@link colorShader}.
 *
 * This code takes too long on mobile: too many API calls.
 * <pre>
 * // draw edges
 * gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
 * gl.vertexAttribPointer(positionIndex, 3, gl.FLOAT, false, 0, 0);
 * gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
 * for (let i = 0; i < theModel.indices.length; i += 3) {
 *   // offset - two bytes per index (UNSIGNED_SHORT)
 *   gl.drawElements(gl.LINE_LOOP, 3, gl.UNSIGNED_SHORT, i * 2);
 * }
 * </pre>
 * The solution is having a single {@link lineBuffer buffer} with all lines,
 * which was set in {@link createModel}.
 * @see {@link https://stackoverflow.com/questions/47232671/how-gl-drawelements-find-the-corresponding-vertices-array-buffer How gl.drawElements "find" the corresponding vertices array buffer?}
 */
function drawLines() {
  // bind the shader
  gl.useProgram(colorShader);

  const positionIndex = gl.getAttribLocation(colorShader, "a_Position");
  if (positionIndex < 0) {
    console.log("Failed to get the storage location of a_Position");
    return;
  }

  const a_color = gl.getAttribLocation(colorShader, "a_Color");
  if (a_color < 0) {
    console.log("Failed to get the storage location of a_Color");
    return;
  }
  // use yellow as line color in the colorShader
  gl.vertexAttrib4f(a_color, 1.0, 1.0, 0.0, 1.0);

  // "enable" the a_position attribute
  gl.enableVertexAttribArray(positionIndex);

  // ------------ draw triangle borders
  // set transformation to projection * view * model
  const loc = gl.getUniformLocation(colorShader, "transform");
  const transform = mat4.multiply(
    [],
    projection,
    mat4.multiply([], viewMatrix, getModelMatrix()),
  );
  gl.uniformMatrix4fv(loc, false, transform);

  // draw edges - single pre-computed lineBuffer
  const len = theModel.indices
    ? theModel.indices.length
    : theModel.vertexPositions.length;
  gl.bindBuffer(gl.ARRAY_BUFFER, lineBuffer);
  gl.vertexAttribPointer(positionIndex, 3, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.LINES, 0, 2 * len);

  // draw normals
  gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer);
  gl.vertexAttribPointer(positionIndex, 3, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.LINES, 0, 2 * theModel.vertexPositions.length);

  gl.disableVertexAttribArray(positionIndex);
  gl.useProgram(null);
}

/**
 * Draws the axes.
 * Uses the {@link colorShader}.
 */
function drawAxes() {
  // bind the shader
  gl.useProgram(colorShader);

  const positionIndex = gl.getAttribLocation(colorShader, "a_Position");
  if (positionIndex < 0) {
    console.log("Failed to get the storage location of a_Position");
    return;
  }

  const colorIndex = gl.getAttribLocation(colorShader, "a_Color");
  if (colorIndex < 0) {
    console.log("Failed to get the storage location of a_Color");
    return;
  }

  gl.enableVertexAttribArray(positionIndex);
  gl.enableVertexAttribArray(colorIndex);

  // draw axes (not transformed by model transformation)
  gl.bindBuffer(gl.ARRAY_BUFFER, axisBuffer);
  gl.vertexAttribPointer(positionIndex, 3, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ARRAY_BUFFER, axisColorBuffer);
  gl.vertexAttribPointer(colorIndex, 4, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);

  // set transformation to projection * view only for extrinsic
  const loc = gl.getUniformLocation(colorShader, "transform");
  const transform = mat4.multiply([], projection, viewMatrix);
  // set transformation to projection * view * model for intrinsic
  if (selector.intrinsic) {
    mat4.multiply(transform, transform, modelMatrix);
  }
  gl.uniformMatrix4fv(loc, false, transform);

  // draw axes
  gl.drawArrays(gl.LINES, 0, 6);

  // unbind shader and "disable" the attribute indices
  // (not really necessary when there is only one shader)
  gl.disableVertexAttribArray(positionIndex);
  gl.disableVertexAttribArray(colorIndex);
  gl.useProgram(null);
}

/**
 * Draws a parallel.
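 *
 * For context, a hedged sketch (not the file's actual generator, and assuming
 * the y axis points to the pole) of building the nsegments points of a circle
 * of latitude φ on the unit sphere:
 * <pre>
 * function parallelVertices(phi, nsegments) {
 *   const pts = [];
 *   const r = Math.cos(phi); // radius of the circle of latitude
 *   for (let i = 0; i < nsegments; ++i) {
 *     const theta = (2 * Math.PI * i) / nsegments;
 *     pts.push(r * Math.cos(theta), Math.sin(phi), r * Math.sin(theta));
 *   }
 *   return new Float32Array(pts); // 3 floats per point, drawn with gl.LINE_LOOP
 * }
 * </pre>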
 * Uses the {@link colorShader}.
 */
function drawParallel() {
  // bind the shader
  gl.useProgram(colorShader);

  const positionIndex = gl.getAttribLocation(colorShader, "a_Position");
  if (positionIndex < 0) {
    console.log("Failed to get the storage location of a_Position");
    return;
  }

  const a_color = gl.getAttribLocation(colorShader, "a_Color");
  if (a_color < 0) {
    console.log("Failed to get the storage location of a_Color");
    return;
  }
  gl.vertexAttrib4f(a_color, 1.0, 0.0, 0.0, 1.0);

  // "enable" the a_position attribute
  gl.enableVertexAttribArray(positionIndex);

  // set transformation to projection * view * model
  const loc = gl.getUniformLocation(colorShader, "transform");
  const transform = mat4.multiply(
    [],
    projection,
    mat4.multiply([], viewMatrix, getModelMatrix()),
  );
  gl.uniformMatrix4fv(loc, false, transform);

  // draw parallel
  gl.bindBuffer(gl.ARRAY_BUFFER, parallelBuffer);
  gl.vertexAttribPointer(positionIndex, 3, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.LINE_LOOP, 0, nsegments);

  // draw meridian
  gl.bindBuffer(gl.ARRAY_BUFFER, meridianBuffer);
  gl.vertexAttribPointer(positionIndex, 3, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.LINE_LOOP, 0, nsegments);

  gl.disableVertexAttribArray(positionIndex);
  gl.useProgram(null);
}

/**
 * Draws all location points.
 * Uses the {@link colorShader}.
 */
function drawLocations() {
  // bind the shader
  gl.useProgram(colorShader);

  const positionIndex = gl.getAttribLocation(colorShader, "a_Position");
  if (positionIndex < 0) {
    console.log("Failed to get the storage location of a_Position");
    return;
  }

  const colorIndex = gl.getAttribLocation(colorShader, "a_Color");
  if (colorIndex < 0) {
    console.log("Failed to get the storage location of a_Color");
    return;
  }

  // "enable" the a_position attribute
  gl.enableVertexAttribArray(positionIndex);
  gl.enableVertexAttribArray(colorIndex);

  // set transformation to projection * view * model
  const loc = gl.getUniformLocation(colorShader, "transform");
  const transform = mat4.multiply(
    [],
    projection,
    mat4.multiply([], viewMatrix, getModelMatrix()),
  );
  gl.uniformMatrix4fv(loc, false, transform);

  // draw locations
  gl.bindBuffer(gl.ARRAY_BUFFER, locationsBuffer);
  gl.vertexAttribPointer(positionIndex, 3, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
  gl.vertexAttribPointer(colorIndex, 3, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.POINTS, 0, cities.current.length);

  gl.disableVertexAttribArray(positionIndex);
  gl.disableVertexAttribArray(colorIndex);
  gl.useProgram(null);
}

/**
 * Get texture file names from an html <select> element
 * identified by "textures".
 */

/**
 * Loads the {@link image texture image} and {@link gpsCoordinates} asynchronously
 * and defines its {@link ImageLoadCallback load callback function}.
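 *
 * A minimal sketch of the expected locations.json shape, keyed by location
 * name (the coordinate values here are only illustrative):
 * <pre>
 * {
 *   "Unknown": { "latitude": 0, "longitude": 0 },
 *   "Rio de Janeiro": { "latitude": -22.9068, "longitude": -43.1729 }
 * }
 * </pre>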
 * @param {Event} event load event.
 * @callback WindowLoadCallback
 * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Window/load_event load event}
 * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/HTMLImageElement/Image Image() constructor}
 * @see {@link https://web.cse.ohio-state.edu/~shen.94/581/Site/Slides_files/texture.pdf Texture Mapping}
 * @see {@link https://www.evl.uic.edu/pape/data/Earth/ Earth images}
 * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch Using the Fetch API}
 * @see {@link https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse JSON.parse()}
 * @event load
 */
window.addEventListener("load", (event) => {
  fetch(`${location.protocol}/cwdc/13-webgl/extras/locations.json`)
    .then((response) => response.text())
    .then((json) => {
      gpsCoordinates = JSON.parse(json);
      cities.byLongitude = Object.keys(gpsCoordinates);
      cities.current = cities.byLongitude;
      currentLocation =
        cities.current[Math.floor(Math.random() * cities.current.length)];

      image = new Image();

      /**
       * Callback after a new texture {@link image} is loaded.
       * When called for the first time, it starts the animation.
       * Otherwise, just loads a new texture.
       * @callback ImageLoadCallback
       */
      image.onload = function () {
        // chain the animation or load a new texture
        if (typeof theModel === "undefined") {
          if (!element.php.checked) {
            getTextures(imageFilename);
            startForReal(image);
          } else {
            readFileNames
              .then((arr) => {
                const initialTexture = imageFilename[0];
                if (arr.length > 0) {
                  imageFilename.splice(0, imageFilename.length, ...arr.sort());
                }
                setTextures(imageFilename);
                textureCnt = imageFilename.indexOf(initialTexture);
                startForReal(image);
              })
              .catch((error) => {
                console.log(`${error}`);
                // don't return anything => execution goes the normal way
                // in case server does not run php
                getTextures(imageFilename);
                startForReal(image);
              });
          }
        } else {
          newTexture(image);
          draw();
        }
      };

      // starts loading the image asynchronously
      image.src = `./textures/${imageFilename[0]}`;
      mercator = imageFilename[0].includes("Mercator");
      isMap = checkForMapTexture(imageFilename[0]);
      document.getElementById("mercator").checked = mercator;
    })
    .catch((err) => {
      console.error(err);
    });
});

/**
 * Sets up all buffers for the given (triangulated) model (shape).
 *
 * Uses the webgl {@link vertexBuffer vertex buffer},
 * {@link normalBuffer normal buffer}, {@link texCoordBuffer texture buffer}
 * and {@link indexBuffer index buffer}, created in {@link startForReal}.
 *
 * Also, the Euler characteristic for the model is:
 * V - E + F = 2 (a closed surface of genus 0, such as the sphere).
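 *
 * A minimal sketch of that check for an indexed triangle mesh (assuming
 * theModel.indices holds three indices per triangle, as in {@link draw}):
 * <pre>
 * function eulerCharacteristic(model) {
 *   const F = model.indices.length / 3; // triangles
 *   const V = model.vertexPositions.length / 3; // xyz per vertex
 *   const E = (3 * F) / 2; // each edge is shared by exactly two triangles
 *   return V - E + F; // 2 for a closed genus-0 mesh
 * }
 * </pre>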
 *
 * The number of triangles must be even for a valid triangulation of the sphere:
 * F = 2V - 4, which is always even.
 */
/**
 * Creates a textured model and triggers the animation.
 *
 * Basically this function does setup that "should" only have to be done once.
 */

/**
 * Appends an event listener for events whose type attribute value is keydown.
* The {@link handleKeyPress callback} argument sets the callback that will be invoked when
 * the event is dispatched.
 */
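
// The registration itself was elided from this excerpt; it presumably
// amounts to this minimal sketch, consistent with the description above:
window.addEventListener("keydown", (event) => handleKeyPress(event));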

/**
 * A closure holding the type of the model.
 *
 * {@link https://vcg.isti.cnr.it/Publications/2012/Tar12/jgt_tarini.pdf Tarini's}
 * method does not work for objects like polyhedra.
 * Therefore, we only use it for subdivision spheres.
 * @return {UVfix}
 * @function
 * @see {@link https://gamedev.stackexchange.com/questions/130888/what-are-screen-space-derivatives-and-when-would-i-use-them What are screen space derivatives}
 */

/**
 * Creates a new texture from an image.
 * Uses the {@link lightingShader}.
 * @param {HTMLImageElement} image texture.
 * @see {@link https://webglfundamentals.org/webgl/lessons/webgl-3d-textures.html WebGL Textures}
 * @see {@link https://jameshfisher.com/2020/10/22/why-is-my-webgl-texture-upside-down/ Why is my WebGL texture upside-down?}
 * @see {@link https://registry.khronos.org/webgl/specs/latest/2.0/#4.1.3 Non-Power-of-Two Texture Access}
 * @see {@link https://www.youtube.com/watch?v=qMCOX3m-R28 What are Mipmaps?}
 */
function newTexture(image) {
  gl.useProgram(lightingShader);

  const imgSize = document.getElementById("size");
  imgSize.innerHTML = `${imageFilename[textureCnt]}`;

  const textimg = document.getElementById("textimg");
  textimg.src = image.src;
  textimg.onload = () => {
    const canvasimg = document.getElementById("canvasimg");
    canvasimg.width = textimg.width;
    canvasimg.height = textimg.height;
    if (selector.paused) {
      drawLinesOnImage();
      if (isMap) drawLocationsOnImage();
    }
  };

  document.getElementById("figc").textContent =
    `(${image.width} x ${image.height})`;
  document.getElementById("textures").value = String(textureCnt);

  // bind the texture
  gl.bindTexture(gl.TEXTURE_2D, textureHandle);

  /*
   * (0,0) in the image coordinate system is the top left corner,
   * and the (0,0) in the texture coordinate system is bottom left.
   * Therefore, load the image bytes to the currently bound texture,
   * flipping the vertical.
   */
  gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

  if (
    typeof WebGL2RenderingContext !== "undefined" ||
    (isPowerOf2(image.width) && isPowerOf2(image.height))
  ) {
    setUVfix();
    // texture parameters are stored with the texture
    gl.generateMipmap(gl.TEXTURE_2D);
    // texture magnification filter - default is gl.LINEAR (blurred)
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR); // reset defaults
    // texture minification filter
    gl.texParameteri(
      gl.TEXTURE_2D,
      gl.TEXTURE_MIN_FILTER,
      gl.LINEAR_MIPMAP_LINEAR,
    );
    // wrapping function for texture coordinate s
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);
    // wrapping function for texture coordinate t
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);
  } else {
    // NPOT
    setUVfix();
    // texture minification filter
    gl.texParameteri(
      gl.TEXTURE_2D,
      gl.TEXTURE_MIN_FILTER,
      gl.LINEAR, // default is gl.NEAREST_MIPMAP_LINEAR
    );
    // wrapping function for texture coordinate s (default is gl.REPEAT)
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);
    // wrapping function for texture coordinate t (default is gl.REPEAT)
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);
  }
  gl.useProgram(null);
}

/**
 * Returns a vector perpendicular to the meridian at the given longitude.
 * A meridian is a line of constant longitude, where longitude is the
 * angle from the prime meridian (0° longitude).
 * The perpendicular vector is in the xz-plane, with y = 0.
 *
 * @param {Number} longitude meridian longitude.
 * @returns {vec3} vector perpendicular to the meridian at the given longitude.
 */
function meridianPerpVec(longitude) {
  const [x, y, z] = spherical2Cartesian(toRadian(longitude), Math.PI / 2, 1);
  return [z, 0, -x];
}

/**
 * Returns a rotation matrix around the vector perpendicular to the
 * given meridian, by the given increment.
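 *
 * Note that the result of {@link meridianPerpVec} is perpendicular by
 * construction, whatever convention spherical2Cartesian uses: for the
 * equatorial point (x, 0, z), the dot product
 * (x, 0, z) · (z, 0, -x) = x·z + 0 - z·x = 0,
 * and it is also perpendicular to the polar y-axis.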
 * Ensure longitude is in the [0,180) range,
 * so that the perpendicular vector does not change direction
 * if the longitude is in the western hemisphere.
 * @param {mat4} out the receiving matrix.
 * @param {GCS} meridian given meridian.
 * @param {Number} increment angle (in radians) to rotate around.
 * @returns {mat4} out.
 */
function meridianMatrix(out, meridian, increment) {
  let longitude = meridian?.longitude || 0;
  if (longitude < 0) {
    longitude += 180;
  }
  const perp = meridianPerpVec(longitude);
  mat4.fromRotation(out, increment, perp);
  return out;
}

/**
 * Defines an {@link frame animation} loop.
 * Step 0.5° ⇒ 60 fps = 30°/s ⇒ 360° in 12s.
 * @see {@link https://dominicplein.medium.com/extrinsic-intrinsic-rotation-do-i-multiply-from-right-or-left-357c38c1abfd Extrinsic & intrinsic rotation}
 */
const animate = (() => {
  /**
   * Amount (in radians) by which to increase the rotation each frame,
   * irrespective of the axis chosen.
   * @type {Number}
   */
  const increment = toRadian(0.5);

  /**
   * An unsigned long integer value, the request ID,
   * that uniquely identifies the entry in the callback list.
   * You should not make any assumptions about its value.
   * You can pass this value to window.cancelAnimationFrame()
   * to cancel the refresh callback request.
   * @type {Number}
   * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/window/requestAnimationFrame Window: requestAnimationFrame() method}
   */
  let requestID = 0;

  /**
   * Rotation matrices for the three coordinate axes.
   * The rotation matrices are created once, at load time, so that
   * they can be reused in each frame without being recalculated.
   * They are used to rotate the model
   * around the x, y, or z axis, depending on the axis chosen.
   * The rotation is done by multiplying the model matrix by the
   * rotation matrix, either on the left (extrinsic rotation) or
   * on the right (intrinsic rotation).
   * @type {Object}
   * @global
   * @property {mat4} x rotation matrix around the x-axis.
   * @property {mat4} y rotation matrix around the y-axis.
   * @property {mat4} z rotation matrix around the z-axis.
   */
  const rotMatrix = {
    x: mat4.fromXRotation([], increment),
    y: mat4.fromYRotation([], increment),
    z: mat4.fromZRotation([], increment),
  };

  /**
   * Callback to keep drawing frames.
   * @callback frame
   * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/window/requestAnimationFrame Window: requestAnimationFrame() method}
   * @see {@link https://developer.mozilla.org/en-US/docs/Web/API/Window/cancelAnimationFrame Window: cancelAnimationFrame() method}
   */
  return () => {
    draw();

    if (requestID != 0) {
      cancelAnimationFrame(requestID);
      requestID = 0;
    }

    if (!selector.paused) {
      if (!isTouchDevice()) {
        updateCurrentMeridian(cursorPosition.x, cursorPosition.y, false);
      } else {
        // on touch devices, use the phong highlight position
        updateCurrentMeridian(...phongHighlight, false);
      }
      const rotationMatrix =
        axis === "q"
          ? meridianMatrix([], currentMeridian, increment)
          : rotMatrix[axis];
      if (selector.intrinsic) {
        // intrinsic rotation - multiply on the right
        mat4.multiply(modelMatrix, modelMatrix, rotationMatrix);
      } else {
        // extrinsic rotation - multiply on the left
        mat4.multiply(modelMatrix, rotationMatrix, modelMatrix);
      }
      rotator.setViewMatrix(modelMatrix);
      // request that the browser calls animate() again "as soon as it can"
      requestID = requestAnimationFrame(animate);
    } else {
      modelMatrix = rotator.getViewMatrix();
    }
  };
})();
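
// A minimal sketch (illustrative only, using the same gl-matrix "mat4" as the
// rest of this file) showing that the two compositions used in animate()
// differ: extrinsic rotation (R·M) spins about a fixed world axis, while
// intrinsic rotation (M·R) spins about the model's own, already-rotated axis.
function rotationOrderDemo() {
  const M = mat4.fromYRotation([], Math.PI / 2); // some model orientation
  const R = mat4.fromXRotation([], Math.PI / 4); // a per-frame increment
  const extrinsic = mat4.multiply([], R, M); // multiply on the left
  const intrinsic = mat4.multiply([], M, R); // multiply on the right
  return { extrinsic, intrinsic }; // different matrices, in general
}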