3D.Compute.Plugin.Shader

The Shader plugin node executes user-provided GLSL vertex and fragment shaders and thereby allows creating a customized compute node for 3D rendering within the GSN Composer.

To this end, a web-based GLSL editor and validator is provided that is similar to other online GLSL tools, such as ShaderFrog, Shdr, WebGLStudio, Kick.js Shader Editor, or the Firefox WebGL Shader Editor.

The main difference is that the GSN Composer is an online node-based visual programming environment, which makes it very simple and convenient to provide the uniform variables and 3D meshes for the shaders. For every uniform variable that is created within the custom GLSL shader code, an input slot is added to the shader node, which can be connected to other nodes of the dataflow graph. This makes shader development fast and intuitive and frees the developer from writing many lines of support code to fill the uniform variables with values. 3D meshes of standard primitives (e.g., cubes, spheres, cylinders, or cones) can be generated with the corresponding nodes. Furthermore, custom 3D meshes can be uploaded in Wavefront OBJ format.
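For example, a single declaration in the GLSL code is enough to expose a matching input slot:

uniform float scale; // creates a float input slot named "scale"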

The Shader plugin node described here is intended for 3D renderings using vertex and fragment shader code. If you are new to shaders, a simpler starting point might be generating image shaders with the ImageShader plugin node of the GSN Composer. It is a bit less complicated because typically fewer inputs are required and only fragment shader code needs to be written. Well-known web-based tools such as ShaderToy or GLSL Sandbox are also image shaders, and the examples on these websites can be a great source of inspiration.

When the Shader node is selected in the graph area, the Edit Code button in the Nodes panel can be clicked. A dialog appears in which GLSL vertex and fragment shader code can be entered.

[Screenshot: the GLSL code editor dialog of the Shader plugin node]

In order to get started with shader programming, the first examples given here closely follow Chapter 9 of the lecture on graphics programming given at the University of Marburg. Other chapters of the lecture might be helpful if you are new to OpenGL programming in general.

First Shader

Vertex shader:
attribute vec3 position; // input vertex position from mesh
attribute vec2 texcoord; // input vertex texture coordinate from mesh
attribute vec3 normal;   // input vertex normal from mesh

varying vec2 tc; // output texture coordinate of vertex
varying vec3 fn; // output fragment normal of vertex

void main(){
  tc = texcoord;
  fn = normal;
  gl_Position = vec4(0.5 * position, 1.0);
}
Fragment shader:
precision mediump float;
varying vec2 tc; // texture coordinate of pixel (interpolated)
varying vec3 fn; // fragment normal of pixel (interpolated)

void main() {
  gl_FragColor = vec4(tc.x, tc.y, 0.0, 1.0);
}

The mesh that is passed into this shader contains a single triangle with three vertices. The vertex positions are (-1.0, -1.0, 0.0), (1.0, -1.0, 0.0), and (0.0, 1.0, 0.0). The corresponding texture coordinates are (0.0, 0.0), (1.0, 0.0), and (0.5, 1.0). The three vertex normals all point in the positive z-direction (0.0, 0.0, 1.0).
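For reference, an equivalent mesh could be supplied in Wavefront OBJ format like this (a minimal sketch; the standard primitive nodes generate such data directly):

v -1.0 -1.0 0.0
v  1.0 -1.0 0.0
v  0.0  1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.5 1.0
vn 0.0 0.0 1.0
f 1/1/1 2/2/1 3/3/1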

The vertex shader is called for each vertex of the input mesh. The current vertex position, texture coordinate, and normal are passed to the shader via the corresponding attribute variables in the first three lines of the vertex shader code. The input texture coordinate and normal are handed over unmodified to the corresponding output varying variables. The built-in variable gl_Position is assigned the output position of the vertex. In this example, the input position is scaled by 0.5.

The fragment shader is called for each pixel of the output image; it is therefore also often referred to as a "pixel shader". The texture coordinates and normals from the vertex shader are interpolated by the rasterizer in OpenGL's internal pipeline, and the values for the current pixel are passed in via the varying variables. The built-in variable gl_FragColor receives the output color of the pixel. Here, the x- and y-coordinates of the texture coordinate are assigned to the red and green color channels, respectively.
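As a concrete example, a pixel at the center of the triangle receives equal barycentric weights of one third for each vertex, so its interpolated texture coordinate is ((0.0 + 1.0 + 0.5) / 3, (0.0 + 0.0 + 1.0) / 3) ≈ (0.5, 0.33), and the resulting output color is approximately (0.5, 0.33, 0.0, 1.0).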

Uniform variables

Uniform variables are used to pass data that remains constant for all vertices/fragments during a rendering pass. For each uniform variable in the GLSL code, a corresponding slot with the same name is created at the input of the shader node. The following table lists the supported GLSL uniform types and the matching GSN data nodes that can be connected to the corresponding slot.

GLSL uniform type     GSN data node
uniform int           PublicParameter.Data.Integer
uniform float         PublicParameter.Data.Float
uniform bool          PublicParameter.Data.Boolean
uniform vec2          Matrix.Data.Matrix
uniform vec3          Matrix.Data.Matrix
uniform vec4          PublicParameter.Data.Color
uniform mat4          Matrix.Data.Matrix
uniform sampler2D     ImageProcessing.Data.Image

An exception occurs if a variable of type "uniform sampler2D" is created together with a variable of type "uniform int" whose name consists of the sampler's variable name plus the suffix "Width" or "Height". In this case, the integer variable is not exposed as an input slot. Instead, the width or height is gathered from the connected image.

Similarly, an exception occurs if a variable of type "uniform mat4" is created together with another variable of type "uniform mat4" whose name consists of the first variable's name plus the suffix "Inverse", "Transposed", or "TransposedInverse". In these cases, the additional mat4 variable is not exposed as an input slot; instead, its values are computed from the input matrix with the corresponding base name.

A mesh can contain multiple mesh groups. To determine which mesh group the current vertex or fragment belongs to, the special uniform variable "gsnMeshGroup" can be used. It is likewise not exposed as an input slot.

A description and a default value can be set within a comment in the same line after the definition of a uniform variable:

uniform int time; // description="Current time" defaultval="0"
uniform float scale; // description="A floating-point scaling value" defaultval="1.0"
uniform vec4 col; // description="Input color" defaultval="1.0, 0.0, 1.0, 1.0"
uniform sampler2D img; // description="Input image"
uniform int imgWidth; // not exposed as slot but gathered from "img"
uniform int imgHeight; // not exposed as slot but gathered from "img"
uniform mat4 modelview; // description="modeview matrix"
uniform mat4 modelviewTransposedInverse; // not exposed as slot, computed from "modelview"
uniform int gsnMeshGroup; // not exposed as slot

As an example, let's pass an OpenGL modelview and projection matrix as uniform variables to render a triangle with a rotating perspective camera. To this end, we extend the code from the initial example with two uniform variables. Once the change is applied in the shader code editor, two new input slots are created, to which matrix data nodes can be connected. The modelview matrix transforms the mesh from its local coordinate system into the camera coordinate system, and the projection matrix performs the camera's perspective projection.

Vertex shader:
attribute vec3 position; // input vertex position from mesh
attribute vec2 texcoord; // input vertex texture coordinate from mesh
attribute vec3 normal;   // input vertex normal from mesh

// uniform variables are exposed as node slots
uniform mat4 projection; //description="camera projection matrix"
uniform mat4 modelview; // description="modelview transformation"

varying vec2 tc; // output texture coordinate of vertex
varying vec3 fn; // output fragment normal of vertex

void main(){
  tc = texcoord;
  fn = normal;
  gl_Position =  projection * modelview * vec4(position, 1.0);
}
Fragment shader:
precision mediump float;
varying vec2 tc; // texture coordinate of pixel (interpolated)
varying vec3 fn; // fragment normal of pixel (interpolated)

void main() {
  gl_FragColor = vec4(tc.x, tc.y, 0.0, 1.0);
}

Transforming surface normals

When transforming surface normals, it must be taken into account that a normal is a unit vector that should be neither translated nor scaled by the modelview transformation, only rotated. As shown in Chapter 9 of the lecture, the required transformation matrix is the transposed inverse of the modelview matrix.
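A brief sketch of why this is the right matrix: a normal n is orthogonal to every tangent vector t of the surface, i.e., n^T t = 0. If tangents are transformed by the modelview matrix M, the transformed normal N n must remain orthogonal to the transformed tangent M t. Choosing N = (M^-1)^T gives

(N n)^T (M t) = n^T M^-1 M t = n^T t = 0

so orthogonality is preserved. For a pure rotation, (M^-1)^T equals M itself, which matches the intuition that normals are only rotated.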

This is demonstrated in the following example. Different outputs can be selected with the uniform variable "mode":

Vertex shader:
attribute vec3 position; // input vertex position from mesh
attribute vec2 texcoord; // input vertex texture coordinate from mesh
attribute vec3 normal;   // input vertex normal from mesh

// uniform variables are exposed as node slots
uniform int mode; // description="select mode" defaultval="1"
uniform mat4 projection; //description="camera projection matrix"
uniform mat4 modelview; // description="modelview transformation"

// transposed inverse modelview matrix
// (not exposed as slot but computed from "modelview")
uniform mat4 modelviewTransposedInverse; 

varying vec4 forFragColor; // output per-vertex color

void main(){
    gl_Position = projection * modelview * vec4(position,  1.0);
    vec4 n = modelviewTransposedInverse * vec4(normal, 0.0);
    vec3 nn = normalize(n.xyz);
    forFragColor = vec4(1.0, 0.0, 0.0, 1.0);
    if(mode == 1) forFragColor = vec4(nn, 1.0); 
    if(mode == 2) forFragColor = vec4(normal, 1.0);
    if(mode == 3) forFragColor = gl_Position;
    if(mode == 4) forFragColor = vec4(position, 1.0);
    if(mode == 5) forFragColor = vec4(texcoord, 0.0, 1.0);
}
Fragment shader:
precision mediump float;
varying vec4 forFragColor; // interpolated fragment color

void main() {
  gl_FragColor = forFragColor;
}

Textures

2D textures can be accessed by defining a uniform variable of type sampler2D. Texture parameters can be selected using name-value pairs in the comment after the definition. This includes the magnification and minification filters as well as the wrap parameters. The following table lists the supported options:
Texture parameter name   Possible values
mag_filter               NEAREST (default), LINEAR
min_filter               NEAREST, LINEAR,
                         LINEAR_MIPMAP_NEAREST (default if supported by browser)
wrap_s                   CLAMP_TO_EDGE (default), REPEAT, MIRRORED_REPEAT
wrap_t                   CLAMP_TO_EDGE (default), REPEAT, MIRRORED_REPEAT
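For instance, the following (hypothetical) declaration would select linear filtering and repeated wrapping for both texture axes:

uniform sampler2D tex; // mag_filter="LINEAR" min_filter="LINEAR" wrap_s="REPEAT" wrap_t="REPEAT"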

Here is a complete example:

Vertex shader:
attribute vec3 position; // input vertex position from mesh
attribute vec2 texcoord; // input vertex texture coordinate from mesh
attribute vec3 normal;   // input vertex normal from mesh

// uniform variables are exposed as node slots
uniform mat4 projection; // description="camera projection matrix"
uniform mat4 modelview; // description="modelview transformation"

// transposed inverse modelview transformation
// (not exposed as slot but computed from "modelview")
uniform mat4 modelviewTransposedInverse; 

varying vec2 tc; // output texture coordinate of vertex
varying vec3 fn; // output fragment normal of vertex

void main(){
  tc = texcoord;
  fn = normalize( vec3( modelviewTransposedInverse * vec4(normal, 0.0) ) );
  gl_Position = projection * modelview * vec4(position, 1.0);
}
Fragment shader:
precision mediump float;
varying vec2 tc; // texture coordinate of pixel (interpolated)
varying vec3 fn; // fragment normal of pixel (interpolated)

uniform sampler2D myTexture; // wrap_s="REPEAT" wrap_t="REPEAT"
uniform float texScale;  // description="scaling factor of texcoords" defaultval="2.0"
uniform int gsnMeshGroup; // not exposed as slot

void main() {
  vec3 normal = normalize(fn);
  vec4 texColor  = texture2D(myTexture, texScale * tc);
  vec4 col = vec4(0.0, 0.0, 0.0, 1.0);
  if(gsnMeshGroup == 0) col = vec4(1.0, 0.0, 0.0, 1.0);
  if(gsnMeshGroup == 1) col = vec4(0.0, 1.0, 0.0, 1.0);
  if(gsnMeshGroup == 2) col = vec4(0.0, 0.0, 1.0, 1.0);
  if(gsnMeshGroup == 3) col = vec4(1.0, 1.0, 0.0, 1.0);
  gl_FragColor = col * texColor;
}

Blinn-Phong Shading with a Directional Light

This is a shader that computes the output pixel color according to the Blinn-Phong shading model:

Vertex shader:
attribute vec3 position; // input vertex position from mesh
attribute vec2 texcoord; // input vertex texture coordinate from mesh
attribute vec3 normal;   // input vertex normal from mesh

// uniform variables are exposed as node slots
uniform mat4 projection; //description="camera projection matrix"
uniform mat4 modelview; // description="modelview transformation"

// transposed inverse modelview transformation
// (not exposed as slot but computed from "modelview")
uniform mat4 modelviewTransposedInverse; 

varying vec2 tc; // output texture coordinate of vertex
varying vec3 fn; // output fragment normal of vertex
varying vec3 vertPos; // output 3D position in camera coordinate system

void main(){
  tc = texcoord;
  fn = normalize( vec3( modelviewTransposedInverse * vec4(normal, 0.0) ) );
  vec4 vertPos4 = modelview * vec4(position, 1.0);
  vertPos = vec3(vertPos4) / vertPos4.w;
  gl_Position = projection * vertPos4;
}
Fragment shader:
precision mediump float; 

varying vec2 tc;  // texture coordinate of pixel (interpolated)
varying vec3 fn;  // fragment normal of pixel (interpolated)
varying vec3 vertPos; // fragment vertex position (interpolated)

uniform sampler2D diffuseTex; // description="diffuse texture"
uniform float ambientFactor; // description="ambientFactor" defaultval="0.1"
uniform vec4 specularColor; // description="specular color" defaultval="1.0, 1.0, 1.0, 1.0"
uniform float shininess; // description="specular shininess exponent" defaultval="20.0"
uniform vec4 lightColor; // description="color of light" defaultval="1.0, 1.0, 1.0, 1.0"
uniform vec3 lightDirection; // light direction

void main() {
  vec3 lightDir = normalize(lightDirection);
  vec3 normal = normalize(fn.xyz);
  float lambertian = max(dot(lightDir, normal), 0.0);
  float specular = 0.0;

  if(lambertian > 0.0) {
    vec3 viewDir = normalize(-vertPos);
    vec3 halfDir = normalize(lightDir + viewDir);
    float specDot = max(dot(halfDir, normal), 0.0);
    specular = pow(specDot, shininess);
  }

  vec4 diffuseColor = texture2D(diffuseTex, tc);
  gl_FragColor = ambientFactor * diffuseColor;
  gl_FragColor += lambertian * diffuseColor;
  gl_FragColor += specular * specularColor;
  gl_FragColor *= lightColor;
  gl_FragColor.a = 1.0;
}

Blinn-Phong Shading with a Point Light

This shader computes the output pixel color according to the Blinn-Phong shading model using a point light. Though the Shader plugin can render only a single mesh, it is possible to combine several meshes as mesh groups into a single mesh and treat the groups differently in the shader.

Vertex shader:
precision mediump float; 
precision mediump int; 

attribute vec3 position; // input vertex position from mesh
attribute vec2 texcoord; // input vertex texture coordinate from mesh
attribute vec3 normal;   // input vertex normal from mesh

// uniform variables are exposed as node slots
uniform mat4 cameraLookAt; //description="camera look at matrix"
uniform mat4 cameraProjection; //description="camera projection matrix"
uniform mat4 meshTransform1; // description="mesh transformation 1"
uniform mat4 meshTransform2; // description="mesh transformation 2"

// transposed inverse cameraLookAt transformation 
// (not exposed as slot but computed from "cameraLookAt")
uniform mat4 cameraLookAtTransposedInverse; 

// transposed inverse meshTransform transformation 
// (not exposed as slot but computed from "meshTransform")
uniform mat4 meshTransform1TransposedInverse; 
uniform mat4 meshTransform2TransposedInverse; 

varying vec2 tc; // output texture coordinate of vertex
varying vec3 fn; // output fragment normal of vertex
varying vec3 vertPos; // output 3D position in camera coordinate system

uniform int gsnMeshGroup;

void main(){
  tc = texcoord;
  // transformation of grid
  mat4 modelview = cameraLookAt;
  mat4 modelviewTransposedInverse = cameraLookAtTransposedInverse;
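  // note: the transposed inverse of a matrix product satisfies
  // (A * B)^-T = A^-T * B^-T, so the transposed inverses below can be
  // composed in the same order as the modelview matrices themselves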
  if(gsnMeshGroup == 1) { // transformation of sphere
   modelview *= meshTransform1;
   modelviewTransposedInverse *= meshTransform1TransposedInverse;
  }
  if(gsnMeshGroup > 1) { // transformation of teapot groups
    modelview *= meshTransform2;
    modelviewTransposedInverse *= meshTransform2TransposedInverse;
  }
  fn = normalize( vec3( modelviewTransposedInverse * vec4(normal, 0.0) ) );
  vec4 vertPos4 = modelview * vec4(position, 1.0);
  vertPos = vec3(vertPos4) / vertPos4.w;
  gl_Position = cameraProjection * vertPos4;
}
Fragment shader:
precision mediump float; 
precision mediump int;

varying vec2 tc; // texture coordinate of pixel (interpolated)
varying vec3 fn; // fragment normal of pixel (interpolated)
varying vec3 vertPos; // fragment vertex position (interpolated)

uniform sampler2D diffuseTex; // description="diffuse texture"
uniform float ambientFactor; // description="ambient factor" defaultval="0.1"
uniform vec4 specularColor; // description="specular color" defaultval="1.0, 1.0, 1.0, 1.0"
uniform float shininess; // description="specular shininess exponent" defaultval="20.0"
uniform vec4 lightColor; // description="color of light" defaultval="1.0, 1.0, 1.0, 1.0"
uniform vec3 lightPosition; //description="light position in the camera coordinate system"
uniform float lightAttenuation; // description="quadratic light attenu." defaultval="1.0"

uniform int gsnMeshGroup;

void main() {
  if(gsnMeshGroup == 1) {
    // color of sphere
    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
  } else {
    // Blinn-Phong shading for the teapot (the grid is handled below)
    vec3 lightVec = lightPosition - vertPos;
    float d = length(lightVec); // distance to light
    float attenuation = 1.0 / (lightAttenuation * d * d);
    vec3 lightDir = normalize(lightVec);
    vec3 normal = normalize(fn.xyz);
    float lambertian = max(dot(lightDir, normal), 0.0);
    float specular = 0.0;
   
    if(lambertian > 0.0) {
      vec3 viewDir = normalize(-vertPos);
      vec3 halfDir = normalize(lightDir + viewDir);
      float specDot = max(dot(halfDir, normal), 0.0);
      specular = pow(specDot, shininess);
    }

    vec4 diffuseColor = texture2D(diffuseTex, tc);
    if(gsnMeshGroup == 0) {
      // color of grid
      diffuseColor = vec4(tc.x, tc.y, 0.0, 1.0);
    }
    
    gl_FragColor = ambientFactor * diffuseColor;
    gl_FragColor += lambertian * diffuseColor * attenuation;
    gl_FragColor += specular * specularColor * attenuation;
    gl_FragColor *= lightColor;
    gl_FragColor.a = 1.0;
  }
}

Image-based Lighting

This shader uses a spherical environment map to perform image-based lighting for a reflective surface. A spherical panorama is scaled down sufficiently such that a single texture lookup averages enough pixels to approximate the solid angle of the diffuse or specular lobe in the spherical environment map. Because the solid angle of the diffuse component is typically much larger, the diffuse environment map is scaled down more than the environment map for the specular component. Unfortunately, the GSN Composer currently does not support loading images with high dynamic range. Therefore, in this example, the environment map's intensity is increased at the position of the sun by some additional nodes.

Vertex shader:
precision mediump float; 
precision mediump int; 

attribute vec3 position; // input vertex position from mesh
attribute vec2 texcoord; // input vertex texture coordinate from mesh
attribute vec3 normal;   // input vertex normal from mesh

// uniform variables are exposed as node slots
uniform mat4 cameraLookAt; //description="camera look at matrix"
uniform mat4 cameraProjection; //description="camera projection matrix"
uniform mat4 meshTransform1; // description="mesh transformation 1"
uniform mat4 meshTransform2; // description="mesh transformation 2"

// transposed inverse meshTransform transformation 
// (not exposed as slot but computed from "meshTransform")
uniform mat4 meshTransform1TransposedInverse; 
uniform mat4 meshTransform2TransposedInverse; 

varying vec2 tc; // output texture coordinate of vertex
varying vec3 fn; // output fragment normal of vertex
varying vec3 vertPos; // output 3D position in camera coordinate system

uniform int gsnMeshGroup;

void main(){
  tc = texcoord;
  // transformation of grid
  mat4 modelview = cameraLookAt;
  mat4 modelTransposedInverse;
  if(gsnMeshGroup == 0) { // transformation of sphere
   modelview *= meshTransform1;
   modelTransposedInverse = meshTransform1TransposedInverse;
  }
  if(gsnMeshGroup > 0) { // transformation of teapot groups
    modelview *= meshTransform2;
    // normals are transformed into the global coordinate system and
    // not into the camera coordinate system as in the examples before
    modelTransposedInverse = meshTransform2TransposedInverse;
  }
  fn = normalize( vec3( modelTransposedInverse * vec4(normal, 0.0) ) );
  vec4 vertPos4 = modelview * vec4(position, 1.0);
  vertPos = vec3(vertPos4) / vertPos4.w;
  gl_Position = cameraProjection * vertPos4;
}
Fragment shader:
precision mediump float; 
precision mediump int;

#define M_PI 3.1415926535897932384626433832795

varying vec2 tc; // texture coordinate of pixel (interpolated)
varying vec3 fn; // fragment normal of pixel (interpolated)
varying vec3 vertPos; // fragment vertex position (interpolated)

uniform sampler2D diffuseTex; // description="diffuse texture"
uniform sampler2D envmapBackground; // min_filter="LINEAR" mag_filter="LINEAR"
uniform bool showBackground; // defaultval="true"
uniform sampler2D envmapDiffuse; // min_filter="LINEAR" mag_filter="LINEAR"
uniform sampler2D envmapSpecular; // min_filter="LINEAR" mag_filter="LINEAR"
uniform float diffuseMix; // description="weighting factor of diffuse" defaultval="0.5"
uniform float specularMix; // description="weighting factor of specular" defaultval="0.5"
uniform vec3 cameraPos; // description="camera position in global coordinate system"
uniform int gsnMeshGroup;

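// maps a direction (in the global coordinate system) to the texture
// coordinates of the spherical environment map: s is the azimuth angle
// mapped to [0.0, 1.0] and t is the polar angle, measured from the
// negative z-axis, mapped to [0.0, 1.0]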
vec2 directionToSphericalEnvmap(vec3 dir) {
  float s = mod(1.0 / (2.0*M_PI) * atan(dir.y, dir.x), 1.0);
  float t = 1.0 / (M_PI) * acos(-dir.z);
  return vec2(s, t);
}
  
void main() {
  vec3 normal = normalize(fn.xyz);
  // Because normals are given in the global coordinate system
  // the position of the camera in the global coordinate system
  // must be passed to the shader as a uniform variable
  // to compute the view direction
  vec3 viewDir = normalize(vertPos - cameraPos);
  vec3 r = reflect(viewDir, normal);
  
  if(gsnMeshGroup == 0) {
    if(showBackground) {
      // color of envmap sphere
      gl_FragColor = texture2D(envmapBackground, tc);
    } else {
      discard;
    }
  } else {
    vec4 base = texture2D(diffuseTex, tc);
    vec4 diff = texture2D(envmapDiffuse, directionToSphericalEnvmap(normal));
    vec4 spec = texture2D(envmapSpecular, directionToSphericalEnvmap(r));

    vec4 color = mix(base, diff * base, diffuseMix);
    color = mix(color, spec * base + color, specularMix);

    gl_FragColor = color;
    gl_FragColor.a = 1.0;
  }
}

Advanced Options

Several advanced rendering options (such as disabling the depth test, blending functions, or the backface-culling mode) can be changed by inserting an additional commented line somewhere in the shader code that starts with the keyword "gsnShaderOptions". The options are selected using name-value pairs in the line after this keyword. The following table lists the supported values:
Name                 Possible values
depth_test           ENABLE (default), DISABLE
cull_face            DISABLE (default), BACK, FRONT, FRONT_AND_BACK
blend_func           "srcFactor, dstFactor", where the source and destination
                     blending factors can be: ZERO, ONE, SRC_COLOR,
                     ONE_MINUS_SRC_COLOR, DST_COLOR, ONE_MINUS_DST_COLOR,
                     SRC_ALPHA, ONE_MINUS_SRC_ALPHA, DST_ALPHA,
                     ONE_MINUS_DST_ALPHA.
                     Default is "ONE, ONE_MINUS_SRC_ALPHA".
blend_func_separate  "rgbSrcFactor, rgbDstFactor, aSrcFactor, aDstFactor",
                     such that the RGB and alpha channel source and destination
                     blending factors can be chosen separately. Possible values
                     are the same as for "blend_func".

For example, the following line of shader code disables the depth test and uses (default) pre-multiplied alpha blending:

// gsnShaderOptions: depth_test="DISABLE" blend_func="ONE,ONE_MINUS_SRC_ALPHA"
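Analogously, backface culling can be enabled and separate blending factors chosen for the RGB and alpha channels (a hypothetical combination for illustration):

// gsnShaderOptions: cull_face="BACK" blend_func_separate="SRC_ALPHA, ONE_MINUS_SRC_ALPHA, ONE, ONE_MINUS_SRC_ALPHA"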