While browsing the latest award-winning websites, you may notice a lot of fancy image distortion animations or neat 3D effects. Most of them are created with WebGL, an API that allows GPU-accelerated image processing effects and animations. They also tend to use libraries built on top of WebGL, such as three.js or pixi.js. Both are very powerful tools for creating 3D and 2D scenes.
But you should keep in mind that these libraries were not originally designed to create slideshows or animate DOM elements. There is a library dedicated to exactly that, though, and we are going to cover how to use it in this article.

WebGL, CSS positioning, and responsive design
Say you are using a library such as three.js or pixi.js, and you want to use it to create interactions like mouseover and scroll events on elements. You might run into trouble! How do you position your WebGL elements relative to the document and the other DOM elements? How do you handle responsive design?
This is exactly what I had in mind when creating curtains.js.
curtains.js allows you to create planes containing images and videos (in WebGL we call them textures) that act like plain HTML elements, with their position and size defined by CSS rules. But those planes can be enhanced with the endless possibilities of WebGL and shaders.
等等,着色器?
着色器是用GLSL编写的程序,它会告诉您的GPU如何渲染您的平面。了解着色器的工作原理在这里是必须的,因为这就是我们处理动画的方式。如果您从未听说过它们,您可能需要先学习基础知识。有很多好的网站可以开始学习它们,例如着色器之书。
现在您已经了解了基本概念,让我们创建第一个平面!
基本平面的设置
要显示我们的第一个平面,我们需要一些 HTML、CSS 和一些 JavaScript 来创建平面。然后我们的着色器将对其进行动画处理。
HTML
The HTML here will be really simple. We will create a <div> that will hold our canvas, and a <div> that will hold our image.
<body>
<!-- div that will hold our WebGL canvas -->
<div id="canvas"></div>
<!-- div used to create our plane -->
<div class="plane">
<!-- image that will be used as a texture by our plane -->
<img src="path/to/my-image.jpg" />
</div>
</body>
CSS
We will use CSS to make sure the <div> that wraps the canvas fits the window, and apply any size to the plane div. (Our WebGL plane will have the exact same size and position as that div.)
We will also add a few basic CSS rules in case there's an error during initialization.
body {
/* make the body fit our viewport */
position: relative;
width: 100%;
height: 100vh;
margin: 0;
/* hide scrollbars */
overflow: hidden;
}
#canvas {
/* make the canvas wrapper fit the window */
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100vh;
}
.plane {
/* define the size of your plane */
width: 80%;
max-width: 1400px;
height: 80vh;
position: relative;
top: 10vh;
margin: 0 auto;
}
.plane img {
/* hide the img element */
display: none;
}
/*** in case of error show the image ***/
.no-curtains .plane {
overflow: hidden;
display: flex;
align-items: center;
justify-content: center;
}
.no-curtains .plane img {
display: block;
max-width: 100%;
object-fit: cover;
}
JavaScript
The JavaScript is where there is a bit more work. We need to instantiate our WebGL context, create a plane with uniform parameters, and use it. For this first example, we will also see how to catch errors.
window.onload = function() {
// pass the id of the div that will wrap the canvas to set up our WebGL context and append the canvas to our wrapper
var webGLCurtain = new Curtains("canvas");
// if there's any error during init, we're going to catch it here
webGLCurtain.onError(function() {
// we will add a class to the document body to display original images
document.body.classList.add("no-curtains");
});
// get our plane element
var planeElement = document.getElementsByClassName("plane")[0];
// set our initial parameters (basic uniforms)
var params = {
vertexShaderID: "plane-vs", // our vertex shader ID
fragmentShaderID: "plane-fs", // our fragment shader ID
uniforms: {
time: {
name: "uTime", // uniform name that will be passed to our shaders
type: "1f", // this means our uniform is a float
value: 0,
},
}
}
// create our plane mesh
var plane = webGLCurtain.addPlane(planeElement, params);
// if our plane has been successfully created
// we use the onRender method of our plane fired at each requestAnimationFrame call
plane && plane.onRender(function() {
plane.uniforms.time.value++; // update our time uniform value
});
}
The shaders
First we need to write our vertex shader. It basically needs to position our plane based on the model view and projection matrices, and pass the varying variables to the fragment shader.
One of those varyings, vTextureCoord, will be used inside the fragment shader to map our texture onto the plane. We could pass our aTextureCoord attribute directly, but we would end up with a stretched texture, since our plane and image don't necessarily share the same aspect ratio. Fortunately, the library provides a texture matrix uniform that we can use to calculate new coordinates that will crop the texture so that it always fits the plane (think of it as the equivalent of background-size: cover).
<!-- vertex shader -->
<script id="plane-vs" type="x-shader/x-vertex">
#ifdef GL_ES
precision mediump float;
#endif
// those are the mandatory attributes that the lib sets
attribute vec3 aVertexPosition;
attribute vec2 aTextureCoord;
// those are mandatory uniforms that the lib sets and that contain our model view and projection matrix
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
// our texture matrix uniform (this is the lib default name, but it could be changed)
uniform mat4 uTextureMatrix0;
// if you want to pass your vertex and texture coords to the fragment shader
varying vec3 vVertexPosition;
varying vec2 vTextureCoord;
void main() {
// get the vertex position from its attribute
vec3 vertexPosition = aVertexPosition;
// set its position based on projection and model view matrix
gl_Position = uPMatrix * uMVMatrix * vec4(vertexPosition, 1.0);
// set the varying variables
// thanks to the texture matrix we will be able to calculate accurate texture coords
// so that our texture will always fit our plane without being distorted
vTextureCoord = (uTextureMatrix0 * vec4(aTextureCoord, 0.0, 1.0)).xy;
vVertexPosition = vertexPosition;
}
</script>
Now on to our fragment shader. Here we will add a little displacement effect based on our time uniform and the texture coordinates.
<!-- fragment shader -->
<script id="plane-fs" type="x-shader/x-fragment">
#ifdef GL_ES
precision mediump float;
#endif
// get our varying variables
varying vec3 vVertexPosition;
varying vec2 vTextureCoord;
// the uniform we declared inside our javascript
uniform float uTime;
// our texture sampler (this is the lib default name, but it could be changed)
uniform sampler2D uSampler0;
void main() {
// get our texture coords
vec2 textureCoord = vTextureCoord;
// displace our pixels along both axis based on our time uniform and texture UVs
// this will create a kind of water surface effect
// try to comment a line or change the constants to see how it changes the effect
// reminder : textures coords are ranging from 0.0 to 1.0 on both axis
const float PI = 3.141592;
textureCoord.x += (
sin(textureCoord.x * 10.0 + ((uTime * (PI / 3.0)) * 0.031))
+ sin(textureCoord.y * 10.0 + ((uTime * (PI / 2.489)) * 0.017))
) * 0.0075;
textureCoord.y += (
sin(textureCoord.y * 20.0 + ((uTime * (PI / 2.023)) * 0.023))
+ sin(textureCoord.x * 20.0 + ((uTime * (PI / 3.1254)) * 0.037))
) * 0.0125;
gl_FragColor = texture2D(uSampler0, textureCoord);
}
</script>
Voilà! You're all set. If everything went well, you should see something like this:
See the Pen curtains.js basic plane by Martin Laxenaire (@martinlaxenaire) on CodePen.
Adding 3D and interactions
OK, that's pretty cool so far, but we talked about 3D and interactions at the beginning of this post, so let's see how to add them.
About the vertices
To add a 3D effect, we will have to change the plane vertices positions inside the vertex shader. However, in our first example we did not specify how many vertices our plane should have, so it was created with a default geometry containing six vertices forming two triangles:

To get decent 3D animations, we will need more triangles, and thus more vertices.
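The relation between the segments count and the vertex count can be sketched in plain JavaScript. This is just a sketch, assuming (as the parameter comments later in this article suggest) that each segment cell is made of two independent triangles, i.e. six vertices:

```javascript
// Each segment cell of the plane's grid is drawn as 2 triangles,
// i.e. 6 vertices (the triangles do not share vertices).
function vertexCount(widthSegments, heightSegments) {
  return widthSegments * heightSegments * 6;
}

console.log(vertexCount(1, 1));   // default geometry: 6 vertices
console.log(vertexCount(20, 20)); // 2400 vertices
```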

Refactoring our JavaScript
Fortunately, it is easy to specify our plane definition, as it can be set in our initial parameters.
We are also going to listen to the mouse position to add a bit of interaction. To do it properly, we will have to wait until the plane is ready, convert our mouse document coordinates to WebGL clip space coordinates, and send them to the shaders as uniforms.
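To make that coordinate conversion concrete, here is a minimal sketch of what mapping document coordinates to clip space involves. toClipSpace is a hypothetical helper written for illustration only; in practice the library's mouseToPlaneCoords() method handles this for us:

```javascript
// Hypothetical helper: convert mouse coordinates (relative to the document)
// into WebGL clip space coordinates ([-1, 1] on both axes) for a given
// bounding rectangle.
function toClipSpace(mouseX, mouseY, rect) {
  return {
    x: ((mouseX - rect.left) / rect.width) * 2 - 1,
    // the Y axis is flipped in clip space: +1 is the top, -1 the bottom
    y: 1 - ((mouseY - rect.top) / rect.height) * 2,
  };
}

// mouse at the exact center of a 400x200 plane whose top left corner is at (100, 50)
console.log(toClipSpace(300, 150, { left: 100, top: 50, width: 400, height: 200 }));
// → { x: 0, y: 0 }
```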
// we are using window onload event here but this is not mandatory
window.onload = function() {
// track the mouse positions to send it to the shaders
var mousePosition = {
x: 0,
y: 0,
};
// pass the id of the div that will wrap the canvas to set up our WebGL context and append the canvas to our wrapper
var webGLCurtain = new Curtains("canvas");
// get our plane element
var planeElement = document.getElementsByClassName("plane")[0];
// set our initial parameters (basic uniforms)
var params = {
vertexShaderID: "plane-vs", // our vertex shader ID
fragmentShaderID: "plane-fs", // our fragment shader ID
widthSegments: 20,
heightSegments: 20, // we now have 20*20*6 = 2400 vertices !
uniforms: {
time: {
name: "uTime", // uniform name that will be passed to our shaders
type: "1f", // this means our uniform is a float
value: 0,
},
mousePosition: { // our mouse position
name: "uMousePosition",
type: "2f", // notice this is a length 2 array of floats
value: [mousePosition.x, mousePosition.y],
},
mouseStrength: { // the strength of the effect (we will attenuate it if the mouse stops moving)
name: "uMouseStrength", // uniform name that will be passed to our shaders
type: "1f", // this means our uniform is a float
value: 0,
},
}
}
// create our plane mesh
var plane = webGLCurtain.addPlane(planeElement, params);
// if our plane has been successfully created we could start listening to mouse/touch events and update its uniforms
plane && plane.onReady(function() {
// set a field of view of 35 to exaggerate perspective
// we could have done it directly in the initial params
plane.setPerspective(35);
// listen our mouse/touch events on the whole document
// we will pass the plane as second argument of our function
// we could be handling multiple planes that way
document.body.addEventListener("mousemove", function(e) {
handleMovement(e, plane);
});
document.body.addEventListener("touchmove", function(e) {
handleMovement(e, plane);
});
}).onRender(function() {
// update our time uniform value
plane.uniforms.time.value++;
// continually decrease mouse strength
plane.uniforms.mouseStrength.value = Math.max(0, plane.uniforms.mouseStrength.value - 0.0075);
});
// handle the mouse move event
function handleMovement(e, plane) {
// touch event
if(e.targetTouches) {
mousePosition.x = e.targetTouches[0].clientX;
mousePosition.y = e.targetTouches[0].clientY;
}
// mouse event
else {
mousePosition.x = e.clientX;
mousePosition.y = e.clientY;
}
// convert our mouse/touch position to coordinates relative to the vertices of the plane
var mouseCoords = plane.mouseToPlaneCoords(mousePosition.x, mousePosition.y);
// update our mouse position uniform
plane.uniforms.mousePosition.value = [mouseCoords.x, mouseCoords.y];
// reassign mouse strength
plane.uniforms.mouseStrength.value = 1;
}
}
Now that our JavaScript is done, we have to rewrite our shaders so that they use our mouse position uniform.
Refactoring the shaders
Let's look at our vertex shader first. We have three uniforms that we can use for our effect:
- the time, which is constantly increasing;
- the mouse position;
- our mouse strength, which is constantly decreasing after each mouse move.
We will use all three of them to create a kind of 3D ripple effect.
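Before diving into the GLSL, the displacement math can be prototyped in plain JavaScript. This is a sketch mirroring the formula the vertex shader uses; rippleZ is a hypothetical name, and it returns the displacement amount for a single vertex:

```javascript
// Plain-JavaScript mirror of the vertex displacement computed in the
// vertex shader: a cosine wave travelling away from the mouse position,
// scaled by the mouse strength.
function rippleZ(vertexX, vertexY, mouseX, mouseY, time, strength) {
  var rippleFactor = 6.0; // bigger value = more ripples
  var distanceFromMouse = Math.hypot(vertexX - mouseX, vertexY - mouseY);
  var rippleEffect = Math.cos(rippleFactor * (distanceFromMouse - time / 120.0));
  return (rippleEffect * strength) / 15.0;
}

// at the mouse position, at time 0 and full strength, the crest is cos(0) / 15
console.log(rippleZ(0, 0, 0, 0, 0, 1)); // ≈ 0.0667
// once the strength has decayed to 0, the plane is flat again
console.log(rippleZ(0.5, 0.5, 0, 0, 120, 0)); // → 0
```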
<script id="plane-vs" type="x-shader/x-vertex">
#ifdef GL_ES
precision mediump float;
#endif
// those are the mandatory attributes that the lib sets
attribute vec3 aVertexPosition;
attribute vec2 aTextureCoord;
// those are mandatory uniforms that the lib sets and that contain our model view and projection matrix
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
// our texture matrix uniform (this is the lib default name, but it could be changed)
uniform mat4 uTextureMatrix0;
// our time uniform
uniform float uTime;
// our mouse position uniform
uniform vec2 uMousePosition;
// our mouse strength
uniform float uMouseStrength;
// if you want to pass your vertex and texture coords to the fragment shader
varying vec3 vVertexPosition;
varying vec2 vTextureCoord;
void main() {
vec3 vertexPosition = aVertexPosition;
// get the distance between our vertex and the mouse position
float distanceFromMouse = distance(uMousePosition, vec2(vertexPosition.x, vertexPosition.y));
// this will define how close the ripples will be from each other. The bigger the number, the more ripples you'll get
float rippleFactor = 6.0;
// calculate our ripple effect
float rippleEffect = cos(rippleFactor * (distanceFromMouse - (uTime / 120.0)));
// calculate our distortion effect
float distortionEffect = rippleEffect * uMouseStrength;
// apply it to our vertex position
vertexPosition += distortionEffect / 15.0;
gl_Position = uPMatrix * uMVMatrix * vec4(vertexPosition, 1.0);
// varying variables
// thanks to the texture matrix we will be able to calculate accurate texture coords
// so that our texture will always fit our plane without being distorted
vTextureCoord = (uTextureMatrix0 * vec4(aTextureCoord, 0.0, 1.0)).xy;
vVertexPosition = vertexPosition;
}
</script>
As for the fragment shader, we will keep it simple. We will fake lights and shadows based on each vertex position:
<script id="plane-fs" type="x-shader/x-fragment">
#ifdef GL_ES
precision mediump float;
#endif
// get our varying variables
varying vec3 vVertexPosition;
varying vec2 vTextureCoord;
// our texture sampler (this is the lib default name, but it could be changed)
uniform sampler2D uSampler0;
void main() {
// get our texture coords
vec2 textureCoords = vTextureCoord;
// apply our texture
vec4 finalColor = texture2D(uSampler0, textureCoords);
// fake shadows based on vertex position along Z axis
finalColor.rgb -= clamp(-vVertexPosition.z, 0.0, 1.0);
// fake lights based on vertex position along Z axis
finalColor.rgb += clamp(vVertexPosition.z, 0.0, 1.0);
// handling premultiplied alpha (useful if we were using a png with transparency)
finalColor = vec4(finalColor.rgb * finalColor.a, finalColor.a);
gl_FragColor = finalColor;
}
</script>
And that's it!
See the Pen curtains.js ripple effect example by Martin Laxenaire (@martinlaxenaire) on CodePen.
With those two simple examples, we have seen how to create a plane and interact with it.
Videos and displacement shaders
Our last example will create a basic fullscreen video slideshow using a displacement shader to enhance the transitions.
Displacement shader concept
The displacement shader will create a nice distortion effect. It will be written inside our fragment shader using a grayscale image, and will offset the pixel coordinates of the videos based on the texture RGB values. Here is the image we will use:

The effect will be calculated based on each pixel's RGB value, a black pixel being [0, 0, 0] and a white pixel being [1, 1, 1] (the GLSL equivalent of [255, 255, 255]). To keep things simple, we will use only the red channel value, since with a grayscale image, red, green and blue are always equal.
You can try to create your own grayscale image (it works great with geometric shapes) to get a unique transition effect.
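That per-pixel logic can be sketched in plain JavaScript. displaceCoord is a hypothetical helper illustrating the same formula the fragment shader will later apply to the first video's coordinates:

```javascript
// Sketch of the displacement principle: offset a texture coordinate by the
// red channel of the displacement map, scaled by a transition factor.
// `red` is a 0..1 float, like the value read by texture2D() in GLSL.
function displaceCoord(coord, red, displacementFactor, effectFactor) {
  return coord - displacementFactor * (red * effectFactor);
}

// a black pixel (red = 0) never displaces the coordinate
console.log(displaceCoord(0.25, 0.0, 1.0, 0.75)); // → 0.25
// a white pixel (red = 1) displaces it the most
console.log(displaceCoord(0.25, 1.0, 1.0, 0.75)); // → -0.5
```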
Multiple textures and videos
A plane can have multiple textures simply by adding multiple image tags. This time, we want to use videos instead of images. We just have to replace the <img /> tags with <video /> tags. However, there are two things to know when it comes to videos:
- On mobile devices, we can't autoplay videos without a user gesture (like a click event). It is therefore safer to add an "enter site" button to display and launch our videos.
- Videos can have a big impact on memory, performance and bandwidth. You should try to keep them as light as possible.
HTML
The HTML is still pretty straightforward. We will create our canvas div wrapper, our plane div containing the textures, and a button to trigger the videos autoplay. Note the use of the data-sampler attribute on the image and video tags: it will be very useful inside our shaders.
<body>
<div id="canvas"></div>
<!-- our plane -->
<div class="plane">
<!-- notice here we are using the data-sampler attribute to name our sampler uniforms -->
<img src="path/to/displacement.jpg" data-sampler="displacement" />
<video src="path/to/video.mp4" data-sampler="firstTexture"></video>
<video src="path/to/video-2.mp4" data-sampler="secondTexture"></video>
</div>
<div id="enter-site-wrapper">
<span id="enter-site">
Click to enter site
</span>
</div>
</body>
CSS
样式表将处理一些事情:在用户进入站点之前显示按钮并隐藏画布,并使我们的plane
div 适应窗口。
@media screen {
body {
margin: 0;
font-size: 18px;
font-family: 'PT Sans', Verdana, sans-serif;
background: #212121;
line-height: 1.4;
height: 100vh;
width: 100vw;
overflow: hidden;
}
/*** canvas ***/
#canvas {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100vh;
z-index: 10;
/* hide the canvas until the user clicks the button */
opacity: 0;
transition: opacity 0.5s ease-in;
}
/* display the canvas */
.video-started #canvas {
opacity: 1;
}
.plane {
position: absolute;
top: 0;
right: 0;
bottom: 0;
left: 0;
z-index: 15;
/* tell the user he can click the plane */
cursor: pointer;
}
/* hide the original image and videos */
.plane img, .plane video {
display: none;
}
/* center the button */
#enter-site-wrapper {
display: flex;
justify-content: center;
align-items: center;
align-content: center;
position: absolute;
top: 0;
right: 0;
bottom: 0;
left: 0;
z-index: 30;
/* hide the button until everything is ready */
opacity: 0;
transition: opacity 0.5s ease-in;
}
/* show the button */
.curtains-ready #enter-site-wrapper {
opacity: 1;
}
/* hide the button after the click event */
.curtains-ready.video-started #enter-site-wrapper {
opacity: 0;
pointer-events: none;
}
#enter-site {
padding: 20px;
color: white;
background: #ee6557;
max-width: 200px;
text-align: center;
cursor: pointer;
}
}
JavaScript
As for the JavaScript, we will go like this:
- Set a couple of variables to store our slideshow state.
- Create our Curtains object and add the plane to it.
- When the plane is ready, listen to a click event to start our videos playback (notice the use of the playVideos() method). Add another click event to switch between the two videos.
- Update our transition timer uniform inside the onRender() method.
window.onload = function() {
// here we will handle which texture is visible and the timer to transition between images
var activeTexture = 1;
var transitionTimer = 0;
// set up our WebGL context and append the canvas to our wrapper
var webGLCurtain = new Curtains("canvas");
// get our plane element
var planeElements = document.getElementsByClassName("plane");
// some basic parameters
var params = {
vertexShaderID: "plane-vs",
fragmentShaderID: "plane-fs",
imageCover: false, // our displacement texture has to fit the plane
uniforms: {
transitionTimer: {
name: "uTransitionTimer",
type: "1f",
value: 0,
},
},
}
var plane = webGLCurtain.addPlane(planeElements[0], params);
// if our plane has been successfully created
plane && plane.onReady(function() {
// display the button
document.body.classList.add("curtains-ready");
// when our plane is ready we add a click event listener that will switch the active texture value
planeElements[0].addEventListener("click", function() {
if(activeTexture == 1) {
activeTexture = 2;
}
else {
activeTexture = 1;
}
});
// click to play the videos
document.getElementById("enter-site").addEventListener("click", function() {
// display canvas and hide the button
document.body.classList.add("video-started");
// play our videos
plane.playVideos();
}, false);
}).onRender(function() {
// increase or decrease our timer based on the active texture value
// at 60fps this should last one second
if(activeTexture == 2) {
transitionTimer = Math.min(60, transitionTimer + 1);
}
else {
transitionTimer = Math.max(0, transitionTimer - 1);
}
// update our transition timer uniform
plane.uniforms.transitionTimer.value = transitionTimer;
});
}
The shaders
This is where all the magic occurs. Like in our first example, the vertex shader won't do much, and you'll have to focus on the fragment shader, which will create the transition effect:
<script id="plane-vs" type="x-shader/x-vertex">
#ifdef GL_ES
precision mediump float;
#endif
// default mandatory variables
attribute vec3 aVertexPosition;
attribute vec2 aTextureCoord;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
// our texture matrices
// notice how it matches our data-sampler attributes + "Matrix"
uniform mat4 firstTextureMatrix;
uniform mat4 secondTextureMatrix;
// varying variables
varying vec3 vVertexPosition;
// our displacement texture will use original texture coords attributes
varying vec2 vTextureCoord;
// our videos will use texture coords based on their texture matrices
varying vec2 vFirstTextureCoord;
varying vec2 vSecondTextureCoord;
// custom uniforms
uniform float uTransitionTimer;
void main() {
vec3 vertexPosition = aVertexPosition;
gl_Position = uPMatrix * uMVMatrix * vec4(vertexPosition, 1.0);
// varying variables
// texture coords attributes because we want our displacement texture to be contained
vTextureCoord = aTextureCoord;
// our videos texture coords based on their texture matrices
vFirstTextureCoord = (firstTextureMatrix * vec4(aTextureCoord, 0.0, 1.0)).xy;
vSecondTextureCoord = (secondTextureMatrix * vec4(aTextureCoord, 0.0, 1.0)).xy;
// vertex position as usual
vVertexPosition = vertexPosition;
}
</script>
<script id="plane-fs" type="x-shader/x-fragment">
#ifdef GL_ES
precision mediump float;
#endif
// all our varying variables
varying vec3 vVertexPosition;
varying vec2 vTextureCoord;
varying vec2 vFirstTextureCoord;
varying vec2 vSecondTextureCoord;
// custom uniforms
uniform float uTransitionTimer;
// our textures samplers
// notice how it matches our data-sampler attributes
uniform sampler2D firstTexture;
uniform sampler2D secondTexture;
uniform sampler2D displacement;
void main( void ) {
// our texture coords
vec2 textureCoords = vTextureCoord;
// our displacement texture
vec4 displacementTexture = texture2D(displacement, textureCoords);
// our displacement factor is a float varying from 1 to 0 based on the timer
float displacementFactor = 1.0 - (cos(uTransitionTimer / (60.0 / 3.141592)) + 1.0) / 2.0;
// the effect factor will tell which way we want to displace our pixels
// the farther from the center of the videos, the stronger it will be
vec2 effectFactor = vec2((textureCoords.x - 0.5) * 0.75, (textureCoords.y - 0.5) * 0.75);
// calculate our displaced coordinates of the first video
vec2 firstDisplacementCoords = vec2(vFirstTextureCoord.x - displacementFactor * (displacementTexture.r * effectFactor.x), vFirstTextureCoord.y - displacementFactor * (displacementTexture.r * effectFactor.y));
// opposite displacement effect on the second video
vec2 secondDisplacementCoords = vec2(vSecondTextureCoord.x - (1.0 - displacementFactor) * (displacementTexture.r * effectFactor.x), vSecondTextureCoord.y - (1.0 - displacementFactor) * (displacementTexture.r * effectFactor.y));
// apply the textures
vec4 firstDistortedColor = texture2D(firstTexture, firstDisplacementCoords);
vec4 secondDistortedColor = texture2D(secondTexture, secondDisplacementCoords);
// blend both textures based on our displacement factor
vec4 finalColor = mix(firstDistortedColor, secondDistortedColor, displacementFactor);
// handling premultiplied alpha
finalColor = vec4(finalColor.rgb * finalColor.a, finalColor.a);
// apply our shader
gl_FragColor = finalColor;
}
</script>
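As a sanity check, the easing performed by the displacementFactor line in the fragment shader above can be reproduced in plain JavaScript:

```javascript
// The displacement factor eases the raw frame-based timer (0 to 60)
// into a smooth 0..1 value, following the same cosine formula as the
// fragment shader.
function displacementFactor(transitionTimer) {
  return 1 - (Math.cos(transitionTimer / (60 / Math.PI)) + 1) / 2;
}

console.log(displacementFactor(0));  // → 0 (first video fully visible)
console.log(displacementFactor(30)); // ≈ 0.5 (halfway through)
console.log(displacementFactor(60)); // → 1 (second video fully visible)
```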
Here is our little video slideshow with a cool transition effect:
See the Pen curtains.js video slideshow by Martin Laxenaire (@martinlaxenaire) on CodePen.
This example is a great way to show you how to create a slideshow with curtains.js: you might want to use images instead of videos, change the displacement texture, modify the fragment shader...
We could also add more slides, but then we would have to handle the texture swapping. We won't cover it here, but you should know there is an example handling it on the library's website.
Going further
We have barely scratched the surface of what is possible with curtains.js. You could, for example, try to create multiple planes with cool mouseover effects for your article thumbnails. The possibilities are almost endless.