
The requestAnimationFrame call causes the loop to be invoked again as soon as the previous frame is done rendering and all event handling has finished.
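
As a minimal sketch, assuming a renderer, camera, and object list like the ones built later in this article, such a loop looks like this:

function loop () {
  renderer.render(camera, objects)  // draw the current frame
  requestAnimationFrame(loop)       // schedule the next iteration
}
loop()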

Vertex Buffer Objects


The first thing you need to do is define the vertices that you want to draw. You can do that by describing them as vectors in 3D space. After that, you want to move that data into the GPU RAM, by creating a new Vertex Buffer Object (VBO).

A Buffer Object in general is an object that stores an array of chunks of memory on the GPU. It being a VBO just denotes what the GPU can use the memory for. Most of the time, Buffer Objects you create will be VBOs.
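
To illustrate that distinction: the same createBuffer() call can also back an index buffer, with only the binding point differing. This sketch is an aside; index buffers aren't used in the rest of this tutorial:

// A buffer object becomes an index buffer simply by being bound to
// ELEMENT_ARRAY_BUFFER instead of ARRAY_BUFFER
var indexBuffer = gl.createBuffer()
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer)
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array([0, 1, 2]), gl.STATIC_DRAW)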

You can fill the VBO by taking all N vertices that you have and creating an array of floats with 3N elements for the vertex position and vertex normal VBOs, and 2N for the texture coordinates VBO. Each group of three floats, or two floats for UV coordinates, represents the individual coordinates of a vertex. Then we pass these arrays to the GPU, and our vertices are ready for the rest of the pipeline.
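
For instance, a single triangle has N = 3 vertices, so its position array holds 3N = 9 floats and its UV array 2N = 6. The data below is a made-up example, not part of the tutorial's model:

// One triangle: N = 3 vertices
var positions = [
  0, 0, 0,   // vertex 1: x, y, z
  1, 0, 0,   // vertex 2
  0, 1, 0    // vertex 3
]            // 3N = 9 floats
var uvs = [
  0, 0,      // vertex 1: u, v
  1, 0,      // vertex 2
  0, 1       // vertex 3
]            // 2N = 6 floats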

Since the data is now on the GPU RAM, you can delete it from the general-purpose RAM. That is, unless you want to modify it later and upload it again. Each modification needs to be followed by an upload, since modifications in our JS arrays don't apply to the VBOs in the actual GPU RAM.

The code example below provides all of the described functionality. An important fact to note is that variables stored on the GPU are not garbage collected. That means that we have to manually delete them once we don't want to use them anymore. We will just give you an example of how that is done here, and will not focus on that concept further on. Deleting variables from the GPU is necessary only if you plan to stop using certain geometry throughout the program.

We also added serialization to our Geometry class and elements within it.

Geometry.prototype.vertexCount = function () {
  return this.faces.length * 3
}

Geometry.prototype.positions = function () {
  var answer = []
  this.faces.forEach(function (face) {
    face.vertices.forEach(function (vertex) {
      var v = vertex.position
      answer.push(v.x, v.y, v.z)
    })
  })
  return answer
}

Geometry.prototype.normals = function () {
  var answer = []
  this.faces.forEach(function (face) {
    face.vertices.forEach(function (vertex) {
      var v = vertex.normal
      answer.push(v.x, v.y, v.z)
    })
  })
  return answer
}

Geometry.prototype.uvs = function () {
  var answer = []
  this.faces.forEach(function (face) {
    face.vertices.forEach(function (vertex) {
      var v = vertex.uv
      answer.push(v.x, v.y)
    })
  })
  return answer
}

////////////////////////////////

function VBO (gl, data, count) {
  // Creates a buffer object in GPU RAM where we can store anything
  var bufferObject = gl.createBuffer()
  // Tell which buffer object we want to operate on as a VBO
  gl.bindBuffer(gl.ARRAY_BUFFER, bufferObject)
  // Write the data, and set the flag to optimize
  // for rare changes to the data we're writing
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW)
  this.gl = gl
  this.size = data.length / count
  this.count = count
  this.data = bufferObject
}

VBO.prototype.destroy = function () {
  // Free the memory occupied by our buffer object
  this.gl.deleteBuffer(this.data)
}

The VBO data type generates the VBO in the passed WebGL context, based on the array passed as a second parameter.

You can see three calls to the gl context. The createBuffer() call creates the buffer. The bindBuffer() call tells the WebGL state machine to use this specific memory as the current VBO (ARRAY_BUFFER) for all future operations, until told otherwise. After that, we set the value of the current VBO to the provided data, with bufferData().

We also provide a destroy method that deletes our buffer object from the GPU RAM by using deleteBuffer().
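
If you do end up modifying vertex data after the initial upload, a hypothetical update method could rebind the buffer and resend the whole array. This is a sketch of that idea, not part of the class above:

VBO.prototype.update = function (data) {
  var gl = this.gl
  // Rebind this buffer object as the current ARRAY_BUFFER
  gl.bindBuffer(gl.ARRAY_BUFFER, this.data)
  // Overwrite the GPU copy with the modified JS array
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW)
  this.size = data.length / this.count
}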

You can use three VBOs and a transformation to describe all the properties of a mesh, together with its position.
function Mesh (gl, geometry) {
  var vertexCount = geometry.vertexCount()
  this.positions = new VBO(gl, geometry.positions(), vertexCount)
  this.normals = new VBO(gl, geometry.normals(), vertexCount)
  this.uvs = new VBO(gl, geometry.uvs(), vertexCount)
  this.vertexCount = vertexCount
  this.position = new Transformation()
  this.gl = gl
}

Mesh.prototype.destroy = function () {
  this.positions.destroy()
  this.normals.destroy()
  this.uvs.destroy()
}

As an example, here is how we can load a model, store its properties in the mesh, and then destroy it:

Geometry.loadOBJ('/assets/model.obj').then(function (geometry) {
  var mesh = new Mesh(gl, geometry)
  console.log(mesh)
  mesh.destroy()
})

Shaders

What follows is the previously described two-step process of moving points into desired positions and painting all the individual pixels. To do this, we write a program that is run on the graphics card many times. This program typically consists of at least two parts. The first part is a Vertex Shader, which is run for each vertex, and outputs where we should place the vertex on the screen, among other things. The second part is the Fragment Shader, which is run for each pixel that a triangle covers on the screen, and outputs the color that the pixel should be painted to.

Vertex Shaders

Let's say you want to have a model that moves left and right on the screen. In a naive approach, you could update the position of each vertex and resend it to the GPU. That process is expensive and slow. Alternatively, you would give the GPU a program to run for each vertex, and do all those operations in parallel with a processor that is made for doing exactly that job. That is the role of a vertex shader.

A vertex shader is the part of the rendering pipeline that processes individual vertices. A call to the vertex shader receives a single vertex and outputs a single vertex, after all possible transformations to the vertex are applied.

Shaders are written in GLSL. There are a lot of unique elements to this language, but most of the syntax is very C-like, so it should be understandable to most people.

There are three types of variables that go in and out of a vertex shader, and all of them serve a specific use:

- attribute variables are per-vertex inputs, such as the position, normal, and UV coordinates of the vertex currently being processed.
- uniform variables are inputs that hold the same value for every vertex within one render call, such as transformation matrices or light parameters.
- varying variables are outputs that are passed on to the fragment shader, where their values are interpolated across the triangle being drawn.

So, let's say you want to create a vertex shader that receives a position, normal, and uv coordinates for each vertex, and a position, view (inverse camera position), and projection matrix for each rendered object. Let's say you also want to paint individual pixels based on their uv coordinates and normals. "How would that code look?" you might ask.
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
varying vec3 vNormal;
varying vec2 vUv;

void main() {
    vUv = uv;
    vNormal = (model * vec4(normal, 0.)).xyz;
    gl_Position = projection * view * model * vec4(position, 1.);
}

Most of the elements here should be self-explanatory. The key thing to notice is that there are no return values in the main function. All values that we would want to return are assigned, either to varying variables, or to special variables. Here we assign to gl_Position, which is a four-dimensional vector, whereby the last dimension should always be set to one. Another strange thing you might notice is the way we construct a vec4 out of the position vector. You can construct a vec4 by using four floats, two vec2s, or any other combination that results in four elements. There are a lot of seemingly strange type castings which make perfect sense once you're familiar with transformation matrices.
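
For illustration, here are a few of the valid ways to construct a vec4 in GLSL; the variable names are ours, not part of the tutorial's shaders:

vec2 ab = vec2(1., 2.);
vec3 abc = vec3(ab, 3.);          // a vec2 and a float
vec4 p1 = vec4(1., 2., 3., 1.);   // four floats
vec4 p2 = vec4(ab, ab);           // two vec2s
vec4 p3 = vec4(abc, 1.);          // a vec3 and a float, as in the shader above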

You can also see that here we can perform matrix transformations extremely easily. GLSL is specifically made for this kind of work. The output position is calculated by multiplying the projection, view, and model matrices and applying them to the position. The output normal is just transformed to world space. We will explain later why we've stopped there with the normal transformations.

For now, we will keep it simple, and move on to painting individual pixels.

Fragment Shaders

A fragment shader is the step after rasterization in the graphics pipeline. It generates color, depth, and other data for every pixel of the object that is being painted.

The principles behind implementing fragment shaders are very similar to those behind vertex shaders. There are three major differences, though:

- There are no more varying outputs, and attribute inputs have been replaced with varying inputs. We have just moved on in the pipeline, and things that are output in the vertex shader are now inputs in the fragment shader.
- Our only output now is gl_FragColor, which is a vec4. The elements represent red, green, blue, and alpha, respectively, each in the 0-to-1 range.
- You have to set the float precision at the start of the fragment shader, which matters for interpolations; the #ifdef guard in the example below does this on platforms that require it.

With that in mind, you can easily write a shader that paints the red channel based on the U position, the green channel based on the V position, and sets the blue channel to maximum.
#ifdef GL_ES
precision highp float;
#endif

varying vec3 vNormal;
varying vec2 vUv;

void main() {
    vec2 clampedUv = clamp(vUv, 0., 1.);
    gl_FragColor = vec4(clampedUv, 1., 1.);
}

The function clamp just limits all floats in an object to be within the given limits. For example, clamping a UV pair of (-0.5, 1.5) to the range 0 to 1 yields (0., 1.). The rest of the code should be pretty straightforward.

With all of that in mind, all that is left is to implement this in WebGL.

Combining Shaders into a Program

The next step is combining the shaders into a program:
function ShaderProgram (gl, vertSrc, fragSrc) {
  var vert = gl.createShader(gl.VERTEX_SHADER)
  gl.shaderSource(vert, vertSrc)
  gl.compileShader(vert)
  if (!gl.getShaderParameter(vert, gl.COMPILE_STATUS)) {
    console.error(gl.getShaderInfoLog(vert))
    throw new Error('Failed to compile shader')
  }

  var frag = gl.createShader(gl.FRAGMENT_SHADER)
  gl.shaderSource(frag, fragSrc)
  gl.compileShader(frag)
  if (!gl.getShaderParameter(frag, gl.COMPILE_STATUS)) {
    console.error(gl.getShaderInfoLog(frag))
    throw new Error('Failed to compile shader')
  }

  var program = gl.createProgram()
  gl.attachShader(program, vert)
  gl.attachShader(program, frag)
  gl.linkProgram(program)
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.error(gl.getProgramInfoLog(program))
    throw new Error('Failed to link program')
  }

  this.gl = gl
  this.position = gl.getAttribLocation(program, 'position')
  this.normal = gl.getAttribLocation(program, 'normal')
  this.uv = gl.getAttribLocation(program, 'uv')
  this.model = gl.getUniformLocation(program, 'model')
  this.view = gl.getUniformLocation(program, 'view')
  this.projection = gl.getUniformLocation(program, 'projection')
  this.vert = vert
  this.frag = frag
  this.program = program
}

// Loads shader files from the given URLs, and returns a program as a promise
ShaderProgram.load = function (gl, vertUrl, fragUrl) {
  return Promise.all([loadFile(vertUrl), loadFile(fragUrl)]).then(function (files) {
    return new ShaderProgram(gl, files[0], files[1])
  })

  function loadFile (url) {
    return new Promise(function (resolve) {
      var xhr = new XMLHttpRequest()
      xhr.onreadystatechange = function () {
        if (xhr.readyState == XMLHttpRequest.DONE) {
          resolve(xhr.responseText)
        }
      }
      xhr.open('GET', url, true)
      xhr.send(null)
    })
  }
}

There isn't much to say about what's happening here. Each shader gets assigned a string as a source and compiled, after which we check to see if there were compilation errors. Then, we create a program by linking these two shaders. Finally, we store pointers to all relevant attributes and uniforms for posterity.

Actually Drawing the Model


Last, but not least, you draw the model.


First, you pick the shader program that you want to use.
ShaderProgram.prototype.use = function () {
  this.gl.useProgram(this.program)
}

Then you send all the camera-related uniforms to the GPU. These uniforms change only once per camera change or movement.
Transformation.prototype.sendToGpu = function (gl, uniform, transpose) {
  gl.uniformMatrix4fv(uniform, transpose || false, new Float32Array(this.fields))
}

Camera.prototype.use = function (shaderProgram) {
  this.projection.sendToGpu(shaderProgram.gl, shaderProgram.projection)
  this.getInversePosition().sendToGpu(shaderProgram.gl, shaderProgram.view)
}

Finally, you assign the transformations and VBOs to the uniforms and attributes, respectively. Since this has to be done for each VBO, you can create its data binding as a method.
VBO.prototype.bindToAttribute = function (attribute) {
  var gl = this.gl
  // Tell which buffer object we want to operate on as a VBO
  gl.bindBuffer(gl.ARRAY_BUFFER, this.data)
  // Enable this attribute in the shader
  gl.enableVertexAttribArray(attribute)
  // Define format of the attribute array. Must match parameters in shader
  gl.vertexAttribPointer(attribute, this.size, gl.FLOAT, false, 0, 0)
}

Then you assign an array of three floats to the uniform. Each uniform type has a different signature, so documentation and more documentation are your friends here. Finally, you draw the triangle array on the screen. You tell the drawing call drawArrays() from which vertex to start, and how many vertices to draw. The first parameter passed tells WebGL how it shall interpret the array of vertices. Using TRIANGLES takes vertices three by three and draws a triangle for each triplet. Using POINTS would just draw a point for each passed vertex. There are many more options, but there is no need to discover everything at once. Below is the code for drawing an object:
Mesh.prototype.draw = function (shaderProgram) {
  this.positions.bindToAttribute(shaderProgram.position)
  this.normals.bindToAttribute(shaderProgram.normal)
  this.uvs.bindToAttribute(shaderProgram.uv)
  this.position.sendToGpu(this.gl, shaderProgram.model)
  this.gl.drawArrays(this.gl.TRIANGLES, 0, this.vertexCount)
}

The renderer needs to be extended a bit, to accommodate all the extra elements that need to be handled. It should be possible to attach a shader program, and to render an array of objects based on the current camera position.
Renderer.prototype.setShader = function (shader) {
  this.shader = shader
}

Renderer.prototype.render = function (camera, objects) {
  this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT)
  var shader = this.shader
  if (!shader) {
    return
  }
  shader.use()
  camera.use(shader)
  objects.forEach(function (mesh) {
    mesh.draw(shader)
  })
}

We can combine all the elements to finally draw something on the screen:
var renderer = new Renderer(document.getElementById('webgl-canvas'))
renderer.setClearColor(100, 149, 237)
var gl = renderer.getContext()

var objects = []

Geometry.loadOBJ('/assets/sphere.obj').then(function (data) {
  objects.push(new Mesh(gl, data))
})
ShaderProgram.load(gl, '/shaders/basic.vert', '/shaders/basic.frag')
             .then(function (shader) {
               renderer.setShader(shader)
             })

var camera = new Camera()
camera.setOrthographic(16, 10, 10)

loop()

function loop () {
  renderer.render(camera, objects)
  requestAnimationFrame(loop)
}

\"对象绘制在画布上,颜色取决于UV坐标\"

\n\n

This looks a bit random, but you can see the different patches of the sphere, based on where they are on the UV map. You can change the shader to paint the object brown. Just set the color of each pixel to be the RGBA for brown:
#ifdef GL_ES
precision highp float;
#endif

varying vec3 vNormal;
varying vec2 vUv;

void main() {
    vec3 brown = vec3(.54, .27, .07);
    gl_FragColor = vec4(brown, 1.);
}

\"Brown

\n\n

It doesn't look very convincing, though. It looks like the scene needs some shading effects.

Adding Light

Light and shadow are the tools that allow us to perceive the shape of objects. Lights come in many shapes and sizes: spotlights that shine in one cone, light bulbs that spread light in all directions, and most interestingly, the sun, which is so far away that all the light it shines on us radiates, for all intents and purposes, in the same direction.

Sunlight sounds like the simplest to implement, since all you need to provide is the direction in which all the rays spread. For each pixel that you draw on the screen, you check the angle at which the light hits the object. This is where the surface normals come in.

\"演示光线和表面法线之间的角度,为平面和平滑着色\"

\n\n

You can see that all the light rays flow in the same direction, but hit the surface at different angles, which are based on the angle between the light ray and the surface normal. The more they coincide, the stronger the light is.

If you perform a dot product between the normalized vectors of the light ray and the surface normal, you will get -1 if the ray hits the surface perfectly perpendicularly, 0 if the ray is parallel to the surface, and 1 if it illuminates it from the opposite side. For example, a surface facing straight up, with normal (0, 1, 0), lit along the normalized direction of (-1, -1, -1), gives a dot product of -1/√3, or roughly -0.58. So anything between 0 and 1 should add no light, while numbers between 0 and -1 should gradually increase the amount of light hitting the object. You can test this by adding a fixed light in the shader code.
#ifdef GL_ES
precision highp float;
#endif

varying vec3 vNormal;
varying vec2 vUv;

void main() {
    vec3 brown = vec3(.54, .27, .07);
    vec3 sunlightDirection = vec3(-1., -1., -1.);
    float lightness = -clamp(dot(normalize(vNormal), normalize(sunlightDirection)), -1., 0.);
    gl_FragColor = vec4(brown * lightness, 1.);
}

\"Brown

\n\n

We set the sun to shine in the forward-left-down direction. You can see how smooth the shading is, even though the model is very jagged. You can also notice how dark the bottom-left side is. We can add a level of ambient light, which will make the area in the shadow brighter.
#ifdef GL_ES
precision highp float;
#endif

varying vec3 vNormal;
varying vec2 vUv;

void main() {
    vec3 brown = vec3(.54, .27, .07);
    vec3 sunlightDirection = vec3(-1., -1., -1.);
    float lightness = -clamp(dot(normalize(vNormal), normalize(sunlightDirection)), -1., 0.);
    float ambientLight = 0.3;
    lightness = ambientLight + (1. - ambientLight) * lightness;
    gl_FragColor = vec4(brown * lightness, 1.);
}

\"带有阳光和环境光的棕色物体\"

\n\n

You can achieve this same effect by introducing a light class, which stores the light direction and the ambient light intensity. Then you can change the fragment shader to accommodate that addition.

Now the shader becomes:

#ifdef GL_ES
precision highp float;
#endif

uniform vec3 lightDirection;
uniform float ambientLight;
varying vec3 vNormal;
varying vec2 vUv;

void main() {
    vec3 brown = vec3(.54, .27, .07);
    float lightness = -clamp(dot(normalize(vNormal), normalize(lightDirection)), -1., 0.);
    lightness = ambientLight + (1. - ambientLight) * lightness;
    gl_FragColor = vec4(brown * lightness, 1.);
}

Then you can define the light:

function Light () {
  this.lightDirection = new Vector3(-1, -1, -1)
  this.ambientLight = 0.3
}

Light.prototype.use = function (shaderProgram) {
  var dir = this.lightDirection
  var gl = shaderProgram.gl
  gl.uniform3f(shaderProgram.lightDirection, dir.x, dir.y, dir.z)
  gl.uniform1f(shaderProgram.ambientLight, this.ambientLight)
}

In the ShaderProgram class, add the needed uniforms:
this.ambientLight = gl.getUniformLocation(program, 'ambientLight')
this.lightDirection = gl.getUniformLocation(program, 'lightDirection')

Then, in the program, add a call to the new light within the renderer:
Renderer.prototype.render = function (camera, light, objects) {
  this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT)
  var shader = this.shader
  if (!shader) {
    return
  }
  shader.use()
  light.use(shader)
  camera.use(shader)
  objects.forEach(function (mesh) {
    mesh.draw(shader)
  })
}

The loop will then change slightly:

var light = new Light()

loop()

function loop () {
  renderer.render(camera, light, objects)
  requestAnimationFrame(loop)
}

If you've done everything right, then the rendered image should be the same as it was in the last image.

A final step to consider would be adding an actual texture to our model. Let's do that now.

Adding Textures

HTML5 has great support for loading images, so there is no need to do crazy image parsing. Images are passed to GLSL as sampler2D by telling the shader which of the bound textures to sample. There is a limited number of textures that can be bound, and the limit is based on the hardware used. A sampler2D can be queried for colors at certain positions. This is where UV coordinates come in.
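
If you're curious what that limit is on your hardware, you can query it at runtime. A quick sketch, assuming an existing WebGL context named gl:

// Ask how many texture units the fragment shader can use;
// WebGL guarantees at least 8
var maxTextures = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS)
console.log('Fragment shader texture units: ' + maxTextures)

Here is an example where we replaced brown with sampled colors:
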
#ifdef GL_ES
precision highp float;
#endif

uniform vec3 lightDirection;
uniform float ambientLight;
uniform sampler2D diffuse;
varying vec3 vNormal;
varying vec2 vUv;

void main() {
    float lightness = -clamp(dot(normalize(vNormal), normalize(lightDirection)), -1., 0.);
    lightness = ambientLight + (1. - ambientLight) * lightness;
    gl_FragColor = vec4(texture2D(diffuse, vUv).rgb * lightness, 1.);
}

The new uniform has to be added to the listing in the shader program:
this.diffuse = gl.getUniformLocation(program, 'diffuse')

Finally, we'll implement texture loading. As previously mentioned, HTML5 provides facilities for loading images. All that needs to be done is to send the image over to the GPU:
function Texture (gl, image) {
  var texture = gl.createTexture()
  // Set the newly created texture as the active texture
  gl.bindTexture(gl.TEXTURE_2D, texture)
  // Set texture parameters, and pass the image that the texture is based on
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image)
  // Set filtering methods
  // Very often shaders will query the texture value between pixels,
  // and this instructs how that value shall be calculated
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR)
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
  this.data = texture
  this.gl = gl
}

Texture.prototype.use = function (uniform, binding) {
  binding = Number(binding) || 0
  var gl = this.gl
  // We can bind multiple textures, and here we pick which of the bindings
  // we're setting right now
  gl.activeTexture(gl['TEXTURE' + binding])
  // After picking the binding, we set the texture
  gl.bindTexture(gl.TEXTURE_2D, this.data)
  // Finally, we pass the binding ID we've used to the uniform
  gl.uniform1i(uniform, binding)
  // The previous 3 lines are roughly equivalent to:
  // texture[i] = this.data
  // uniform = i
}

Texture.load = function (gl, url) {
  return new Promise(function (resolve) {
    var image = new Image()
    image.onload = function () {
      resolve(new Texture(gl, image))
    }
    image.src = url
  })
}

The process isn't much different from the one used to load and bind VBOs. The main difference is that we're no longer binding to an attribute, but rather binding the index of the texture to an integer uniform. The sampler2D type is nothing more than a pointer offset to a texture.

Now all that needs to be done is extending the Mesh class, to handle textures as well:
function Mesh (gl, geometry, texture) { // texture is new
  var vertexCount = geometry.vertexCount()
  this.positions = new VBO(gl, geometry.positions(), vertexCount)
  this.normals = new VBO(gl, geometry.normals(), vertexCount)
  this.uvs = new VBO(gl, geometry.uvs(), vertexCount)
  this.texture = texture // new
  this.vertexCount = vertexCount
  this.position = new Transformation()
  this.gl = gl
}

Mesh.prototype.destroy = function () {
  this.positions.destroy()
  this.normals.destroy()
  this.uvs.destroy()
}

Mesh.prototype.draw = function (shaderProgram) {
  this.positions.bindToAttribute(shaderProgram.position)
  this.normals.bindToAttribute(shaderProgram.normal)
  this.uvs.bindToAttribute(shaderProgram.uv)
  this.position.sendToGpu(this.gl, shaderProgram.model)
  this.texture.use(shaderProgram.diffuse, 0) // new
  this.gl.drawArrays(this.gl.TRIANGLES, 0, this.vertexCount)
}

Mesh.load = function (gl, modelUrl, textureUrl) { // new
  var geometry = Geometry.loadOBJ(modelUrl)
  var texture = Texture.load(gl, textureUrl)
  return Promise.all([geometry, texture]).then(function (params) {
    return new Mesh(gl, params[0], params[1])
  })
}

The final main script would look like this:
var renderer = new Renderer(document.getElementById('webgl-canvas'))
renderer.setClearColor(100, 149, 237)
var gl = renderer.getContext()

var objects = []

Mesh.load(gl, '/assets/sphere.obj', '/assets/diffuse.png')
    .then(function (mesh) {
      objects.push(mesh)
    })

ShaderProgram.load(gl, '/shaders/basic.vert', '/shaders/basic.frag')
             .then(function (shader) {
               renderer.setShader(shader)
             })

var camera = new Camera()
camera.setOrthographic(16, 10, 10)
var light = new Light()

loop()

function loop () {
  renderer.render(camera, light, objects)
  requestAnimationFrame(loop)
}

\"Textured

\n\n

Even animating comes easy at this point. If you wanted the camera to rotate around our object, you can do it by adding just one line of code:
function loop () {
  renderer.render(camera, light, objects)
  camera.position = camera.position.rotateY(Math.PI / 120)
  requestAnimationFrame(loop)
}

\"Rotated

\n\n

Feel free to play around with the shaders. Adding one line of code will turn this realistic lighting into something cartoonish.
void main() {
    float lightness = -clamp(dot(normalize(vNormal), normalize(lightDirection)), -1., 0.);
    lightness = lightness > 0.1 ? 1. : 0.; // new
    lightness = ambientLight + (1. - ambientLight) * lightness;
    gl_FragColor = vec4(texture2D(diffuse, vUv).rgb * lightness, 1.);
}

It's as simple as forcing the lightness to its extremes based on whether it crossed a set threshold.

\"Head

\n\n

Where to Go Next

There are many sources for learning all the tricks and intricacies of WebGL. And the best part is that if you can't find an answer that relates to WebGL, you can look for it in OpenGL, since WebGL is pretty much based on a subset of OpenGL, with some names being changed.

In no particular order, great sources of more detailed information, for both WebGL and OpenGL, include the Khronos specifications and reference pages, as well as MDN's WebGL documentation.