From GLSL to WGSL: the future of shaders on the Web
How does the new and shiny WebGPU Shading Language compare to the seasoned GLSL?
WebGPU ships with Chrome soon. Now is a good time to have a look at what shaders will increasingly look like over the next 10 years — if everything goes according to plan. First, what is it, what does it look like, and how do you switch your mindset coming from GLSL?
A companion language for the new WebGPU API
History
After some discussion, a consensus was found: a new text-based language needed to be created while staying bijective to SPIR-V. Long story short, Apple has issues with Khronos (more on this in my previous article, Graphics on the web and beyond with WebGPU).
Even though it is currently possible to use GLSL via @webgpu/glslang, WGSL will be the way to go in JavaScript. As WGSL can be interpreted not only on the web but also in Rust and C/C++ implementations, we might as well rip off the bandaid and avoid fragmenting the ecosystem with too many different languages.
What it means for shaders in the future
In my previous article, I also talk about what a new language implies for lib authors and end developers that write shaders:
> New language also means we’ll need to rewrite all of our shaders, yay. Hopefully, with the help of good tooling developed by the WebGPU Community Group and thanks to the similarities between HLSL, GLSL and WGSL (maybe), this task should become trivial (fingers crossed).
Well, I was a tad optimistic there, but help is on the way with naga. Maybe we should also take this as an opportunity to upgrade our engines and move more logic to compute in order to take performance to the next level, like WebGL did a decade ago.
Even if the name sounds similar, at the end of the day, the language is quite different.
WGSL and GLSL: comparing the syntaxes’ basics
Overall the syntaxes are similar — WGSL is a shading language so you still talk to the GPU in more or less the same fashion — but with some quirks, especially coming from the Web flavour of GLSL. I have implemented the ability to use both WGSL and GLSL (4.5) in my WebGPU engine DGEL, so I’ll share my findings here. Let’s do a one-to-one feature comparison.
Scalar and matrix types
First off, even though the spec says that plain types in WGSL are similar to Plain-Old-Data types in C++, you also very much feel the influence of Rust in the design.
The `bool` (`true`/`false`) behaves the same, but for the other scalars it seems somehow very important for you to know that they are 32-bit:
| WGSL | GLSL |
| :---- | :------- |
| `i32` | `int` |
| `u32` | `uint` |
| `f32` | `float` |
| `N/A` | `double` |
Although we rarely used `bvec`/`ivec`/`uvec` in our shaders, defining vectors was concise enough when using `vec2/3/4`. With WGSL, the component type needs to be specified in angle brackets, e.g. a vector with 4 int elements is `vec4<i32>`. Very explicit.
If you were thinking of rejecting this new type syntax and just aliasing the types, well, sorry to disappoint, but `type vec4 = vec4<f32>;` will fail as `vec4` is a reserved word. You could go for `type float4 = vec4<f32>;` but that's not very GLSL-y. The hopes of not having to rewrite GLSL shaders stop here.
The matrix types are also very verbose: `matNxM<f32>` with N columns and M rows (2, 3 or 4 each) of floats. No shorthand here:
| WGSL | GLSL |
| :------------ | :------- |
| `mat2x2<f32>` | `mat2` |
| `mat3x2<f32>` | `mat3x2` |
| `mat4x2<f32>` | `mat4x2` |
| `mat2x3<f32>` | `mat2x3` |
| `mat3x3<f32>` | `mat3` |
| `mat4x3<f32>` | `mat4x3` |
| `mat2x4<f32>` | `mat2x4` |
| `mat3x4<f32>` | `mat3x4` |
| `mat4x4<f32>` | `mat4` |
It is also not possible to generate the identity matrix as easily as with `mat4(1.0)` in GLSL. At least, they are still column-major. At the time of writing, you can only pass vectors to construct a matrix (e.g. `mat2x2<f32>(vec2<f32>(1.0, 0.0), vec2<f32>(0.0, 1.0));`) but a spec change is on the way to allow passing floats directly, like we do in GLSL.
Arrays also have explicit types, `array<E, N>`, so an array of 8 two-component float vectors will be `array<vec2<f32>, 8>`.
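For illustration, such an array could be declared and indexed like this (a sketch; `offsets` and `first_offset` are my names):

```wgsl
// An array of 8 two-component float vectors, at module scope
var<private> offsets: array<vec2<f32>, 8>;

fn first_offset() -> vec2<f32> {
  // Indexing works just as in GLSL
  return offsets[0];
}
```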
Structs
Good news everyone: apart, obviously, from the member declarations, the block structure is the same.
- WGSL:

```wgsl
struct Light {
  position: vec3<f32>,
  color: vec4<f32>,
  attenuation: f32,
  direction: vec3<f32>,
  innerAngle: f32,
  angle: f32,
  range: f32,
};
```
- GLSL:

```glsl
struct Light {
  vec3 position;
  vec4 color;
  float attenuation;
  vec3 direction;
  float innerAngle;
  float angle;
  float range;
};
```
Hard to get it wrong here, just make sure to end each member line with a comma and not a semicolon, as in Rust structs.
Uniform buffer object
WebGL2 (GLSL 300 ES) gave us UBOs as a means to reduce the number of uniform bindings and the load on GPUs, making things generally faster.
The main difference here is the need to declare a struct for it instead of using `layout()`. A `set` is called a `group` while `binding` keeps the same name. Overall this feels more readable.
- WGSL:

```wgsl
struct SystemUniform {
  projectionMatrix: mat4x4<f32>,
  viewMatrix: mat4x4<f32>,
  inverseViewMatrix: mat4x4<f32>,
  cameraPosition: vec3<f32>,
  time: f32,
};
@group(0) @binding(0) var<uniform> system: SystemUniform;
```
- GLSL:

```glsl
layout(set = 0, binding = 0) uniform SystemUniform {
  mat4 projectionMatrix;
  mat4 viewMatrix;
  mat4 inverseViewMatrix;
  vec3 cameraPosition;
  float time;
} system;
```
We’ll talk about this `var` below.
Function declarations
Here is another area where Rust left its mark. For a GLSL user, put your left eyeball to the right and vice-versa as the types (return and arguments) are switched:
- WGSL:

```wgsl
fn saturate(x: f32) -> f32 {
  return clamp(x, 0.0, 1.0);
}
```

- GLSL:

```glsl
float saturate(float x) {
  return clamp(x, 0.0, 1.0);
}
// or
#define saturate(x) clamp(x, 0.0, 1.0)
// but more on that later
```
Built-in
| WGSL | Stage | IO | GLSL |
| ---------------------: | ------: | :-- | :---------------------- |
| vertex_index | vertex | in | gl_VertexID |
| instance_index | vertex | in | gl_InstanceID |
| position | vertex | out | gl_Position |
| position | frag | in | gl_FragCoord |
| front_facing | frag | in | gl_FrontFacing |
| frag_depth | frag | out | gl_FragDepth |
| local_invocation_id | compute | in | gl_LocalInvocationID |
| local_invocation_index | compute | in | gl_LocalInvocationIndex |
| global_invocation_id | compute | in | gl_GlobalInvocationID |
| workgroup_id | compute | in | gl_WorkGroupID |
| num_workgroups | compute | in | gl_NumWorkGroups |
| sample_index | frag | in | gl_SampleID |
| sample_mask | frag | in | gl_SampleMask |
| sample_mask | frag | out | gl_SampleMask |
Looks pretty similar to GLSL here; you just need to pass it to your main entry point function (`instance_index`) and define it in its returned value (the `Output` struct `position`):
```wgsl
// UBOs
struct SystemUniform {
  projectionMatrix: mat4x4<f32>,
  viewMatrix: mat4x4<f32>,
};
@group(0) @binding(0) var<uniform> system: SystemUniform;

struct MeshUniform {
  modelMatrix: array<mat4x4<f32>, 256>,
};
@group(1) @binding(0) var<uniform> mesh: MeshUniform;

// Output
struct Output {
  @builtin(position) position: vec4<f32>,
};

@vertex
fn main(
  @builtin(instance_index) instance_index: u32,
  @location(0) position: vec3<f32>
) -> Output {
  var output: Output;
  let modelMatrix = mesh.modelMatrix[instance_index];
  output.position = system.projectionMatrix * system.viewMatrix * modelMatrix * vec4<f32>(position, 1.0);
  return output;
}
```
Considered a statement, `discard` also works the same as in GLSL.
A quick guide on how not to hate rewriting all your shaders
Alright, after a look at the basics, let me provide a non-exhaustive list of divergences that will definitely bug you when coming to WGSL from GLSL.
var/let
So we said goodbye to types as variable qualifiers in favour of `var` and `let`. Great, like in JavaScript, so there's a `const` too, right? No. It is only a reserved word for now. If you are looking for immutability, go for `let`:

```wgsl
let GAMMA: f32 = 2.2;
```
One of the goals with `var name: type` was apparently to make it closer to TypeScript. After some complaints, type inference has been added for the sake of conciseness and readability, so you can write `var position = vec2<f32>(0.0, 0.0);` instead of `var position: vec2<f32> = vec2<f32>(0.0, 0.0);`.
No preprocessor (#define/#ifdef/#if defined())
The assumption here is that preprocessing will happen on the client side, for instance with string replacement (hello `#include`) or with template strings in JavaScript. The latter is what I do in dgel, but it proved to be less flexible and made it harder to quickly test different implementations without recompiling a complete shader. On the other hand, that means less extra code in shaders, so they might be more specialised and easier to read once pre-processed.
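As a rough sketch of what client-side preprocessing with template strings can look like (the `buildShader` helper and its `useGamma` flag are made-up names for illustration, not dgel's actual API):

```javascript
// Minimal client-side "preprocessor" using a template string:
// inject constants and conditionally include code before the
// source is handed to device.createShaderModule().
function buildShader({ useGamma }) {
  const GAMMA = 2.2;
  return /* wgsl */ `
let GAMMA: f32 = ${GAMMA};

fn toLinear(v: f32) -> f32 {
  ${useGamma ? "return pow(v, GAMMA);" : "return v;"}
}
`;
}

console.log(buildShader({ useGamma: true }));
```

The shader variant is chosen at build time in JavaScript, so the WGSL source that reaches the GPU contains no dead branches.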
<f32> everywhere
32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit, 32-bit.
It’s basically like seeing `this` in a JavaScript class. I am just going to ask a question here: with the same arguments as for type inference, why not simplify the most commonly used type, and where is the middle ground in terms of readability vs explicitness?
Arithmetic and assignment operators, l-value swizzling: where are you?
Assigning gets more verbose when comparing with GLSL swizzling:

```glsl
vec4 color = vec4(1.0);
color.xyz = vec3(0.1, 0.2, 0.3);
```

vs

```wgsl
var color = vec4<f32>(1.0);
color = vec4<f32>(0.1, 0.2, 0.3, color.a);
```
The current spec is also missing the compound assignment operators (`+=`, `-=`, `*=`, `/=`, `%=`…) as well as increment (`++`), decrement (`--`) and exponentiation (`**`).
That will really challenge your will to go through your shaders.
Note: initially delayed, it might not be a lost cause after all.
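For example, a simple accumulation loop has to spell everything out in full (a sketch; the variable names are mine):

```wgsl
var sum: f32 = 0.0;
// No "sum +=" and no "i++" here: both are written out in full
for (var i: u32 = 0u; i < 4u; i = i + 1u) {
  sum = sum + f32(i);
}
```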
Branching: elseif vs else if? no ternary? no optional braces?
elseif/elsif/elif
Although initially stated as `elseif`, it looks like we're going towards the familiar `else if`.
?
Another huge pain point when coming from GLSL and JavaScript: the ternary operator is replaced with a built-in `select`. `all`/`any` (equivalent to `Array.every`/`Array.some` in JavaScript) are good additions though.
See more here: https://www.w3.org/TR/WGSL/#logical-builtin-functions
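For example, a GLSL ternary like `float y = x > 0.5 ? 1.0 : 0.0;` would become something along these lines:

```wgsl
// select(falseValue, trueValue, condition)
// returns trueValue when condition is true
let y: f32 = select(0.0, 1.0, x > 0.5);
```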
Brace
Bracing is mandatory, so:
```glsl
if (diff <= 0.0) return vec3(0.0);
```

will have to be expanded to:

```wgsl
if (diff <= 0.0) {
  return vec3<f32>(0.0);
}
```
Function overloading
I saved the best till last. One very common thing we do in GLSL is overloading, but that’s not possible in WGSL. E.g. the following GLSL code:
```glsl
float toLinear(float v) {
  return pow(v, GAMMA);
}
vec2 toLinear(vec2 v) {
  return pow(v, vec2(GAMMA));
}
vec3 toLinear(vec3 v) {
  return pow(v, vec3(GAMMA));
}
vec4 toLinear(vec4 v) {
  return vec4(toLinear(v.rgb), v.a);
}
```
won’t translate to:
```wgsl
// WILL THROW
fn toLinear(v: f32) -> f32 {
  return pow(v, GAMMA);
}
fn toLinear(v: vec2<f32>) -> vec2<f32> {
  return pow(v, vec2<f32>(GAMMA));
}
fn toLinear(v: vec3<f32>) -> vec3<f32> {
  return pow(v, vec3<f32>(GAMMA));
}
fn toLinear(v: vec4<f32>) -> vec4<f32> {
  return vec4<f32>(toLinear(v.rgb), v.a);
}
```
You’ll need to find a naming convention such as `toLinear`, `toLinear2`, `toLinear3`, `toLinear4`.
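Which gives something along these lines (the numbered suffixes are my convention, not part of the spec):

```wgsl
fn toLinear3(v: vec3<f32>) -> vec3<f32> {
  return pow(v, vec3<f32>(GAMMA));
}

fn toLinear4(v: vec4<f32>) -> vec4<f32> {
  // Reuse the vec3 version for the rgb components
  return vec4<f32>(toLinear3(v.rgb), v.a);
}
```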
Note: delayed to post MVP.
Conclusion
There you have it: not the friendliest to write but more explicit than GLSL. I hope a lot of the above shortcomings will be resolved in a 1.1 version after MVP lands.
WGSL is fresh from the oven and the bakers are creating the recipe as they go, but it is based on a good cookbook from experienced chefs. A lot is still in motion and it is fascinating to see the process unfold publicly on GitHub, with people from different backgrounds expressing their views on what the language should be.
With so many applications across so many target environments, you obviously can’t please everyone, but WGSL is looking like a decent compromise for the future of cross-platform graphics, especially on the Web.