
Normal Mapping using PShaders in Processing.js April 4, 2015

Posted by Andor Salga in gfx, GLSL, JavaScript, Open Source, Processing.js.

Try my normal mapping PShader Demo:

Last year I made a very simple normal map demo in Processing.js and posted it on OpenProcessing. It was fun to write, but the performance bothered me: the demo was very slow because it renders to a 2D canvas, so there's no hardware acceleration.

Now, I have been working on adding PShader support to Processing.js in my spare time, so here and there I'll make a few updates. After fixing a bug in my implementation recently, I had enough to port my normal map demo over to shaders. Instead of having the lighting calculations in the sketch code, I could have them in GLSL/shader code. I figured this should increase the performance quite a bit.

Converting the demo from Processing/Java code to GLSL was pretty straightforward, aside from working out a couple of annoying bugs. I got the demo to resemble what I originally had a year ago, but now the performance is much, much better :) I'm no longer limited to a tiny 256×256 canvas; I can use the full client area of the browser. Even with specular lighting, it runs at a solid 60 fps. Yay!

If you’re interested in the code, here it is. It’s also posted on github.

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 iResolution;
uniform vec3 iCursor;

uniform sampler2D diffuseMap;
uniform sampler2D normalMap;

void main(){
	vec2 uv = vec2(gl_FragCoord.xy / 512.0);
	uv.y = 1.0 - uv.y;

	vec2 p = vec2(gl_FragCoord);
	float mx = p.x - iCursor.x;
	float my = p.y - (iResolution.y - iCursor.y);
	float mz = 500.0;

	vec3 rayOfLight = normalize(vec3(mx, my, mz));
	vec3 normal = vec3(texture2D(normalMap, uv)) - 0.5;
	normal = normalize(normal);

	float nDotL = max(0.0, dot(rayOfLight, normal));
	vec3 reflection = normal * (2.0 * (nDotL)) - rayOfLight;

	vec3 col = vec3(texture2D(diffuseMap, uv)) * nDotL;

	if(iCursor.z == 1.0){
		float specIntensity = max(0.0, dot(reflection, vec3(.0, .0, 1.)));
		float specRaised = pow(specIntensity, 20.0);
		vec3 specColor = specRaised * vec3(1.0, 0.5, 0.2);
		col += specColor;
	}

	gl_FragColor = vec4(col, 1.0);
}

Wibbles! March 19, 2015

Posted by Andor Salga in 1GAM, Game Development, Games, JavaScript, Open Source.


I wanted to learn the basics of require.js and pixi.js, so I thought it would be fun to create a small game to experiment with these libraries. I decided to make a clone of a game I used to play: Nibbles. It was a QBASIC game that I played on my 80386.

Getting started with require.js was pretty daunting. There's a bunch of documentation, but I found much of it confusing; examples online helped some, and experimenting to see what worked and what didn't helped me the most. On the other hand, pixi.js was very, very similar to Three.js, so much so that I found myself guessing the API and being right most of the time. It's a fun 2D WebGL rendering library that has a canvas fallback. It was overkill for what I was working on, but it was still a good learning experience.
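
If you're curious, a minimal require.js + pixi.js setup looks something like this (a sketch, not Wibbles' actual code; the paths, color, and sizes are illustrative):

// configure require.js: a base directory and a short alias for the pixi.js build
requirejs.config({
  baseUrl: 'js',
  paths: { pixi: 'lib/pixi' },
  shim:  { pixi: { exports: 'PIXI' } } // in case the build doesn't register an AMD module
});

// require.js loads pixi.js first, then hands the PIXI module to the callback
requirejs(['pixi'], function (PIXI) {
  var stage = new PIXI.Stage(0x000000);             // root display container
  var renderer = PIXI.autoDetectRenderer(400, 300); // WebGL, falls back to 2D canvas
  document.body.appendChild(renderer.view);

  (function tick() {
    renderer.render(stage);      // draw the scene
    requestAnimationFrame(tick); // basic render loop
  })();
});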

Gomba 0.2 November 24, 2014

Posted by Andor Salga in Game Development, Open Source, Processing, Processing.js.


I’ve been busy nursing my cat back to health, so I missed blogging last Saturday :( He’s doing a bit better, so I’m trying to stay hopeful.

Today I did manage to find some time to catch up on my blogging, so here are the major changes on Gomba:

  • Fixed a major physics issue (running too quick & jumping was broken)
  • Added coinbox
  • Fixed kicking a sprite from a brick
  • Added render layers

Rendering Layers

The most significant change I added was rendering layers. This allows me to specify a layer for each gameobject. Clouds and background objects must exist on lower layers, then things like coins should be a bit higher, then the goombas, Mario, and other sprites higher still. You can think of each layer as one of the transparent sheets high school teachers use for overhead projectors. Do they have digital projectors yet?? I can also change a gameobject's layer at runtime, so when a goomba is 'kicked', I can move it to the very top layer (closest to the user) so that it appears as if the sprite is being removed from the world. Rendering them under the bricks would just look strange.

I used a binary tree to internally manage the rendering of the layers. This was probably overkill; I could have made do with a plain array, dynamically resizing it as needed when a layer index was too high (a sketch of that simpler approach is below). Ah well. I plan to abstract the structure even further so the implementation is unknown to the scene. I also need to fix tunnelling issues and x-collision issues too… Maybe for next month.
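
Here's a minimal sketch of that simpler array-based approach (LayerManager and the render() interface are illustrative names, not Gomba's actual API):

// One array per layer; lower indices render first, so they sit farther back.
function LayerManager() {
  this.layers = [];
}

// Grow the array on demand if a layer index is too high.
LayerManager.prototype.add = function (gameObject, layerIndex) {
  while (this.layers.length <= layerIndex) {
    this.layers.push([]);
  }
  this.layers[layerIndex].push(gameObject);
};

// Moving a gameobject (say, a kicked goomba) to the top layer is just a
// remove from its old layer and an add at the highest index.
LayerManager.prototype.move = function (gameObject, fromLayer, toLayer) {
  var arr = this.layers[fromLayer];
  var i = arr.indexOf(gameObject);
  if (i !== -1) { arr.splice(i, 1); }
  this.add(gameObject, toLayer);
};

// Render back-to-front so higher layers draw over lower ones.
LayerManager.prototype.render = function () {
  for (var i = 0; i < this.layers.length; i++) {
    for (var j = 0; j < this.layers[i].length; j++) {
      this.layers[i][j].render();
    }
  }
};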

Gomba 0.15 October 25, 2014

Posted by Andor Salga in Game Development, Open Source, Processing, Processing.js.


Play demo

I’m releasing a 0.15 version of Gomba, a component-based Processing platform game. I’m trying to be consistent about releases, so that means making a release every 4 weeks. I didn’t get everything I wanted into this release, so it’s not quite a 0.2. In any event, here are some of the changes that did make it in:

– Added platforms!
– Added audio channels for sound manager
– Many of the same component type can now be added to a gameobject
– Added goombas & squashing functionality
– Added functionality to punch bricks
– Fixed requestAnimationFrame issue for smoother graphics

I’m excited that I now have a sprite that can actually jump on things. But adding this functionality also introduced a bunch of bugs I now have to address. I have a list of issues I’m going to be tackling for the next 4 weeks, which should be fun.

Gomba 0.1 September 20, 2014

Posted by Andor Salga in Game Development, Open Source, Processing, Processing.js.

Play demo

I was reading the Processing book The Nature of Code by Daniel Shiffman and came upon a section dealing with physics. I hadn't written many sketches that use physics calculations, so I figured it would be fun to implement a simple runner/platformer game in Processing that uses forces, acceleration, velocity, etc.

I decided to use a component-based architecture and I found it surprisingly fun to create components and tack them on to game objects. So far, I only have a preliminary amount of functionality done and I still need to sort out most of the collision code, but progress is good.

This marks my 0.1 release. I still have quite a way to go, but it's a start. You can take a look at the code on github or play around with the demo.

I got a bunch of inspiration from Pomax. He's already created a Processing.js game engine you can check out here.

BTW “gomba” in Hungarian is mushroom :)

Understanding Raycasting Step-by-Step January 22, 2014

Posted by Andor Salga in C++, Game Development, gfx, Open Source, OpenFrameworks.



For some time I’ve wanted to understand how raycasting works. Raycasting is a graphics technique for rendering 3D scenes and it was used in old video games such as Wolfenstein 3D and Doom. I recently had some time to investigate and learn how this was done. I read up on a few resources, digested the information, and here is my understanding of it, paraphrased.

The ideas here are not my own. The majority of the code is from the references listed at the bottom of this post. I wrote this primarily to test my own understanding of the techniques involved.

Raycasting vs. Raytracing

Raycasting is not the same as raytracing. Raycasting casts width rays into a scene; it is essentially a 2D problem and it is non-recursive. Raytracing, on the other hand, casts width*height rays into a 3D scene, and those rays can bounce off several objects before a final color is calculated. It is much more involved and isn't typically used for real-time applications.


Having spent a few years playing with Processing and Processing.js, I felt it was time for me to move on to OpenFrameworks. I’ve always favored C++ over Java, so this was an excuse to make my first OpenFrameworks application. My first one! (:

Background Theory

The idea is that we draw a very simple 3D world by casting viewportWidth rays into a scene. These rays are based on the player's position, direction, and field of view. Our scene is primitive: a 2D array populated with integers, where each non-zero value represents a different colored cell/block and 0 represents empty space.

We iterate from the left side of the viewport to the right, creating a ray for every column of pixels. Once a ray hits the edge of a cell, we calculate its length. Based on how far away the edge is, we draw a shorter or longer single pixel vertical line centered in the viewport. This is how we will achieve foreshortening.

Since our implementation does not involve any 3D geometry, this technique can be implemented on anything that supports a 2D context. This includes HTML5 canvas and even TI-83 devices.

What really excites me about this method is that the complexity is not dependent on the number of objects in the level, but instead on the number of horizontal pixels in the viewport! Very cool!!

Initial Setup

Inside an empty OpenFrameworks template, you will have an ofApp.h header file. In this file we declare a few variables: pos, right, dir, FOV, and rot all define properties of our player. The width and height variables are aliases and worldMap defines our world.

#pragma once

#include "ofMain.h"

class ofApp : public ofBaseApp{
public:
  ofVec2f pos;
  ofVec2f right;
  ofVec2f dir;
  float FOV;
  float rot;

  int width;
  int height;
  int worldMap[15][24] = {
    // map data elided in the post: 0 for empty space, 1-3 for colored wall cells
  };

  void setup();
  void update();
  void draw();
  void keyPressed(int key);
  void keyReleased(int key);
};
In the setup method of our ofApp.cpp implementation file we assign initial starting values to our player. The position is where the player will be relative to the top left of the array/world. We don't assign the direction and right vectors here since those are set in update() every frame.

void ofApp::setup(){
  pos.set(5, 5);
  FOV = 45;
  rot = PI;
  width = ofGetWidth();
  height = ofGetHeight();
}

Setting the Direction and Right Vectors

We need two crucial vectors, the player’s direction and right vector.

In update(), we calculate these vectors based on the scalar rot value. (The keyboard will later drive the rot value, thus rotating the view).

Once we have the direction, we can calculate the perpendicular vector using a perp operation. There is a method which performs this task for you, but I wanted to demonstrate how easy it is here. It just involves swapping the components and negating one.

If our rot value is initially PI, we end up looking toward the left inside the world. Since our world is defined by an array, y increases downward in our world coordinate system.

void ofApp::update(){
  dir.x = cos(rot);
  dir.y = sin(rot);
  right.x = -dir.y;
  right.y = dir.x;
}

Drawing a Background

We can now start getting into our render method.

To clear each frame we are going to draw 2 rectangles directly on top of the viewport. The top half of the viewport will be the sky and the bottom will be grass.

void ofApp::draw(){
  // Blue rectangle for the sky
  ofSetColor(64, 128, 255);
  ofRect(0, 0, width, height/2);
  // Green rectangle for the ground
  ofSetColor(0, 255, 0);
  ofRect(0, height/2, width, height);

Calculating the Base Length of the Viewing Triangle

The FOV determines the angle from which the rays shoot from the player’s position. The exact center ray will be our player’s direction. Perpendicular to this ray is a line which connects to the far left and far right rays. This all forms a little isosceles triangle. The apex being the player’s position. The base of the triangle is the camera line magnitude which we are trying to calculate. The purpose of getting this is that we need to know how far left and right the rays need to span. The greater the FOV, the wider the rays will span.

Using some simple trigonometry, we can get this length. Note that we divide by two since we’ll be generating the rays in two parts, left side of the direction vector and the right side.

  float camPlaneMag = tan( FOV / 2.0f * (PI / 180.0) );
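
To make that concrete: with the 45 degree FOV assigned in setup(), camPlaneMag = tan(22.5°) ≈ 0.414, so the camera line extends about 0.414 units to either side of the unit-length direction vector.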

Generating the Rays

Before we start the main render loop, we declare rayDir which will be re-used for each loop iteration.

  ofVec2f rayDir;

Each iteration of this loop will be responsible for rendering a single pixel vertical line in the viewport. From the far left side of the screen to the far right.

  for(int x = 0; x < width; x++){

For each vertical slice, we need to generate a ray casting out into the scene. We do this by mapping the screen coordinates [0 to width-1] to [-1 to 1]. When the mapped value is 0 (the middle of the screen), the ray we cast for intersection tests is collinear with our direction vector.

currCamScale drives the direction of each ray; it is the factor by which we scale the right vector.

    float currCamScale = (2.0f * x / float(width)) - 1.0f;

Here is where we generate our rays.

Our right vector is scaled depending on the base of our triangle viewing area. Then we scale it again depending on which slice we are currently rendering. If x = 0, currCamScale maps to -1 which means we are generating the farthest left ray.

Add the resultant vector to the direction vector and this will form our current ray shooting into our scene.

    rayDir.set(dir.x + (right.x * camPlaneMag) * currCamScale, 
               dir.y + (right.y * camPlaneMag) * currCamScale);

Calculating the Magnitude Between Edges

Each ray travels inside a 2D world defined by an array populated with integers. Most of the values are 0 representing empty space. Anything else denotes a wall which is 1×1 unit in dimensions. The integers then help define the ‘cell’ edges. These edges will be used to help us perform intersection tests with the rays.

What we are trying to figure out is: for each ray, when does it hit an edge? To answer that question we'll need to figure out some things:

1. What is the magnitude from the player's position traveling along the current ray to the nearest x edge (and y edge)? Note I say magnitude because we will be working with SCALAR values. I got confused here trying to figure this part out because I kept thinking about vectors.

2. What is the magnitude from one x edge to the next x edge (and y edge to the next y edge)? We aren't trying to figure out the vertical/horizontal distance; that's just 1 unit. Instead, we need to calculate the hypotenuse of the triangle formed by the ray going from one edge to the other. Again, when calculating the magnitude for x, the horizontal base leg of the triangle formed will be 1, and when calculating the magnitude for y, the vertical height of the triangle will be 1. Drawing this on paper helps.

Once we have these values we can start running the DDA algorithm. Selecting the nearest x or y edge, testing if that edge has a filled block associated with it, and if not, incrementing some values. We’ll have two scalar values ‘racing’ one another.

Okay, let’s answer the second question first. Based on the direction, what is the magnitude/hypotenuse length from one x edge to another? We know horizontally, the length is 1 unit. That means we need to scale our ray such that the x component will be 1.

What value multiplied by x will result in 1? The answer is the inverse of x! So if we multiply the x component of the ray by 1/x, we’ll need to do the same for y to maintain the ray direction. This gives us a new vector:

v = [ rayDir.x * 1 / rayDir.x , rayDir.y * 1 / rayDir.x]

We already know the result of the calculation for the first component, so we can simplify:

v = [ 1, rayDir.y / rayDir.x ]

Then get the magnitude using the Pythagorean theorem: |v| = √(1² + (rayDir.y / rayDir.x)²).

    float magBetweenXEdges = ofVec2f(1, rayDir.y * (1.0 / rayDir.x) ).length();
    float magBetweenYEdges = ofVec2f(rayDir.x * (1.0 / rayDir.y), 1).length();

Calculating the Starting Magnitude

For this we have to calculate the length from the player’s position to the nearest x and y edges. To do this, we get the relative position from within the cell then use that as a scale for the magnitude between the edges.

Here we keep track of which direction the ray is going by setting dirStepX which will be used to jump from cell to cell in the correct direction.

For x, we have a triangle with a base horizontal length of 1 and some magnitude for its hypotenuse which we recently figured out. Now, based on the relative position of the user within a cell, how long is the hypotenuse of this triangle?

If the x position of the player was 0.2:

magToXedge = (0 + 1 - 0.2) * magBetweenXedges
           = 0.8 * magBetweenXedges

This means we are 80% away from the closest x edge, thus we need 80% of the hypotenuse. The same method is used to calculate the starting position of the y edge.

    // declarations implied by the post: the walk starts in the player's cell
    int worldIndexX = int(pos.x), worldIndexY = int(pos.y);
    float magToXedge, magToYedge;
    int dirStepX, dirStepY;

    if(rayDir.x > 0){
      magToXedge = (worldIndexX + 1.0 - pos.x) * magBetweenXEdges;
      dirStepX = 1;
    }
    else{
      magToXedge = (pos.x - worldIndexX) * magBetweenXEdges;
      dirStepX = -1;
    }

    if(rayDir.y > 0){
      magToYedge = (worldIndexY + 1.0 - pos.y) * magBetweenYEdges;
      dirStepY = 1;
    }
    else{
      magToYedge = (pos.y - worldIndexY) * magBetweenYEdges;
      dirStepY = -1;
    }

Running the Search

Here x and y values ‘race’. If one length is less than the other, we increase the shortest, increment 1 unit in that direction and check again. We end the loop as soon as we find a non-empty cell edge.

Note, we keep track of not only the last cell we were working with, but also which edge (x or y) we hit, using sideHit.

    int sideHit;

    do{
      if(magToXedge < magToYedge){
        magToXedge += magBetweenXEdges;
        worldIndexX += dirStepX;
        sideHit = 0;
      }
      else{
        magToYedge += magBetweenYEdges;
        worldIndexY += dirStepY;
        sideHit = 1;
      }
    }while(worldMap[worldIndexX][worldIndexY] == 0);

Selecting a Color & Lighting

We kept track of the non-empty index we hit, so we can use this to index into the map and get the associated color for the cell.

    ofColor wallColor;
    switch(worldMap[worldIndexX][worldIndexY]){
      case 1: wallColor = ofColor(255, 255, 255); break;
      case 2: wallColor = ofColor(0, 255, 255);   break;
      case 3: wallColor = ofColor(255, 0, 255);   break;
    }

If all faces of the walls were a solid color, it would be difficult to determine edges and where the walls were in relation to each other. So to provide a very rudimentary form of lighting, any time the rays hit an x edge, we darken the current color.

    wallColor = sideHit == 0 ? wallColor : wallColor/2.0f;

Preventing Distortion

The farther the wall is from the player, the smaller the vertical slice needs to be on the screen. Keep in mind that when viewing a wall dead on, the rays toward the edges of the screen are longer, which would result in shorter 'scanlines' there. This isn't desired since it warps our representation of the world.

After I posted this, I was e-mailed asking how this part works; here's an image to help clarify things: [raycast distortion fix diagram]

Essentially, we take the base of the 'lower' triangle and divide it by the base of the 'upper' triangle. Proportionally, this value is the same as the ratio of the perpendicular camera line to the direction vector, so what we end up using is the perpendicular distance to the wall rather than the raw ray length, which is what prevents the distortion.

    float wallDist;

    if(sideHit == 0){
      wallDist = fabs((worldIndexX - pos.x + (1.0 - dirStepX) / 2.0) / rayDir.x);
    }
    else{
      wallDist = fabs((worldIndexY - pos.y + (1.0 - dirStepY) / 2.0) / rayDir.y);
    }

Calculating Line Height

If a ray was 1 unit from the wall, we draw a line that fills the entire length of the screen.

To provide a proper FPS feel, we center the vertical lines in screen space along Y. The middle of the cells will be at eye height.

To calculate how long the vertical line needs to be on the screen, we divide the viewport height by the distance from the wall. So if the wall is 1 unit from the player, it will fill up the entire vertical slice.

At this point we need to center our line vertically in the viewport. Picture two vertical lines, a shorter one centered beside a taller one: the top of the shorter line sits at the taller line's midpoint minus half the shorter line's height. That gives us the starting y position for the line we have to draw.

    float lineHeight = height/wallDist;
    float startY = height/2.0 - lineHeight/2.0;

Rendering a Slice

We finish our render loop by drawing a single-pixel-wide vertical line for this x scanline iteration, centered in the viewport. You can always clamp the lines to the bottom of the screen and make the player feel like they are a mouse in a maze (:

    ofLine(x, startY, x, startY + lineHeight);
  }// finish loop
}// finish draw()

Keyboard Control

I'm still learning OpenFrameworks, so my implementation for keyboard control works, but it's a hack. I'm too embarrassed to post the code until I resolve the issue I'm dealing with. In the meantime, I leave the task of updating the position and rotation of the player up to the reader.


[1] http://www.permadi.com/tutorial/raycast/index.html
[2] http://lodev.org/cgtutor/raycasting.html
[3] http://en.wikipedia.org/wiki/Ray_casting

Game 1 for 1GAM 2014 – Asteroids January 14, 2014

Posted by Andor Salga in 1GAM, Game Development, Open Source, Processing, Processing.js.


Skip the blog and play Asteroids!

Back in November, I picked up a contract to develop Asteroids in Processing.js. After developing the game, I lost touch with my contractee, and thus the $150. Soon after, I went on vacation, and when I returned I decided to polish off what I had and post it as a 1GAM entry. I added some audio, gave it a more authentic look and feel, added more effects, and the like. So this is my official release for my first 2014 1GAM!

Implementing PShader.set() October 5, 2013

Posted by Andor Salga in JavaScript, Processing, Processing.js, PShader.

I was in the process of writing ref tests for my implementation of PShader.set() in Processing.js when I ran into a nasty problem. PShader.set() can take a variety of types, including single floats and integers, to set uniform shader variables. For example, we can have the following:

pShader.set("i", 1);
pShader.set("f", 1.0);

If the second argument is an integer, we must call uniform1i on the WebGL context; otherwise uniform1f needs to be called. But in JavaScript, we can't distinguish between 1.0 and 1. I briefly considered modifying the interface for this method, but changing the interface was the last thing I wanted to do. So I thought about it until I came up with an interesting solution: why not call both uniform1i and uniform1f, one right after the other? What would happen? It turns out, it works! One will always fail and the other will succeed, leaving us with the proper uniform set!
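
In code, the idea looks something like this (a minimal sketch, not the actual Processing.js source; gl is the WebGL context and program is the linked shader program):

function setUniformNumber(gl, program, name, value) {
  var loc = gl.getUniformLocation(program, name);

  // If the uniform was declared as an int in the shader, uniform1f generates
  // INVALID_OPERATION and is silently ignored; if it was declared as a float,
  // uniform1i is the call that fails. Either way, the matching call sticks.
  gl.uniform1i(loc, value);
  gl.uniform1f(loc, value);
}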

Developing Shaders in Vim August 7, 2013

Posted by Andor Salga in Vim.

I’ve been recently writing shaders in Vim and found some fun shortcuts to help in development.

Frequently I'll want both vertex and fragment shaders open in the same terminal window, but I'll want Vim to have multiple windows. In the man pages, I found a simple command line switch to open both files, one on top of another. This is known as stacked windows.

vim -o vert.glsl frag.glsl

But I like seeing the shaders side-by-side, especially if there are a bunch of varying variables, so use an uppercase “O” switch instead.

vim -O vert.glsl frag.glsl

Making it Pretty

I finally decided to add some color to vim, so I looked around for a syntax file for GLSL and found one at vim.org. Download and install it to get syntax highlighting.

So after starting vim, I like running these lastline commands to set the line numbers and get the nice GLSL syntax highlighting:

:set nu
:syntax on
:set syntax=glsl

Make sure there isn’t a space between “=” and “glsl”, otherwise you’ll get an “Unknown option” error.
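
If you'd rather not type these every time, a couple of lines in your .vimrc can do the same thing automatically (a sketch; adjust the pattern to whatever extensions you use for your shaders):

syntax on
autocmd BufNewFile,BufRead *.glsl setlocal number syntax=glsl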


Now that the windows are all set up, you can start navigating between them and editing your shaders. To manipulate the windows you use CTRL-W, which enters the window command mode, then add an extra character for what you want. For example, to toggle the cursor between the windows you can use CTRL-W w. This involves holding down CTRL, pressing 'w', letting go of both keys, and pressing 'w' again. To close the current window, you can use CTRL-W q in the same way. You can of course still use the lastline mode commands :wq or :q!

If you ever need to split up the windows further you have a few options. You can use CTRL-W s to split a window horizontally or CTRL-W v to split vertically. You can also go to lastline mode and type:

:split

Or if you want to split the current window vertically,

:vsplit
I found some of the more useful window command mode commands are these:
CTRL-W w – Go to the next window
CTRL-W q – Quit the current window
CTRL-W s – Split the current window horizontally
CTRL-W v – Split the current window vertically
CTRL-W n – Create a new empty window
CTRL-W = – Make all windows equal size

But better yet, start Vim and type :help windows which will give you more information than you’d ever want about windows!

Experimenting with Normal Mapping using PShaders May 19, 2013

Posted by Andor Salga in GLSL, Processing, PShader.


Just over half a year ago, I wrote a blog post about Experimenting with Normal Mapping. I wrote a simple Processing sketch that demonstrated the technique, and I also wrote a hacked-up Processing.js sketch to squeeze a few extra frames/sec out of the browser side of things.

This long weekend, I found myself with some extra time to hack on something. I remember that several weeks ago, Processing 2 introduced PShaders, which at the time I found exciting, but I didn’t have a chance to look at them. So this weekend I decided to take a look into this new PShader object. I haven’t touched shaders in a while, so I brushed up on them by reading the PShader tutorial on the Processing page.

After my refresher, I got to hacking and re-wrote my normal mapping sketch. Here is my complete sketch along with vertex and fragment shaders:

The sketch:

PImage diffuseMap;
PImage normalMap;

PShape plane;

PShader normalMapShader;

void setup() {
  size(256, 256, P3D);
  diffuseMap = loadImage("crossColor.jpg");
  normalMap = loadImage("crossNormal.jpg");
  plane = createPlane(diffuseMap);
  normalMapShader = loadShader("texfrag.glsl", "texvert.glsl");
  normalMapShader.set("normalMap", normalMap);

void draw(){
  translate(width/2, height/2, 0);
  // the rest of this body was cut off in the original post; presumably it
  // applies the shader and draws the plane, e.g. shader(normalMapShader); shape(plane);
}

void mouseMoved(){
  updateCursorCoords();
}

void mouseDragged(){
  updateCursorCoords();
}

void updateCursorCoords(){
  normalMapShader.set("mouseX", (float)mouseX);
  normalMapShader.set("mouseY", height - (float)mouseY);

void mousePressed(){
  normalMapShader.set("useSpecular", 1);

void mouseReleased(){
  normalMapShader.set("useSpecular", 0);

PShape createPlane(PImage tex) {
  // beginShape/texture/endShape reconstructed here; the post omitted them
  PShape sh = createShape();
  sh.beginShape();
  sh.texture(tex);
  sh.vertex( 1, -1, 0, 1, 0);
  sh.vertex( 1,  1, 0, 1, 1);
  sh.vertex(-1,  1, 0, 0, 1);
  sh.vertex(-1, -1, 0, 0, 0);
  sh.endShape(CLOSE);
  return sh;
}

The vertex shader:


uniform mat4 transform;
uniform mat4 texMatrix;

attribute vec4 vertex;
attribute vec2 texCoord;

varying vec4 vertTexCoord;

void main() {
  gl_Position = transform * vertex;
  vertTexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);
}

The fragment shader:

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

#define PI 3.14159265

uniform sampler2D normalMap;
uniform sampler2D colorMap;

uniform int useSpecular;

uniform float mouseX;
uniform float mouseY;

varying vec4 vertTexCoord;

const vec3 view = vec3(0,0,1);
const float shine = 40.0;

void main() {
  // Convert the RGB values to XYZ
  vec4 normalColor  = texture2D(normalMap, vertTexCoord.st);
  vec3 normalVector = vec3(normalColor - vec4(0.5));
  normalVector = normalize(normalVector);

  vec3 rayOfLight = vec3(gl_FragCoord.x - mouseX, gl_FragCoord.y - mouseY, -150.0);
  rayOfLight = normalize(rayOfLight);

  float nDotL = dot(rayOfLight, normalVector);

  vec3 finalSpec = vec3(0);

  if(useSpecular == 1){
    vec3 reflection = normalVector;
    reflection = reflection * nDotL * 2.0;
    reflection -= rayOfLight;
    float specIntensity = pow( dot(reflection, view), shine);
    finalSpec = vec3(1.0, 0.5, 0.2) * specIntensity;
  }

  float finalDiffuse = acos(nDotL)/PI;

  gl_FragColor = vec4(finalSpec + vec3(texture2D(colorMap, vertTexCoord.st) * finalDiffuse), 1.0);
}


I found using PShaders very exciting, since I could place all this computational work on the GPU rather than the CPU. So I wondered about the performance versus my old sketch. I'm on a Mac mini, and after running tests I found my original normal mapping sketch ran at 30 FPS with diffuse lighting and at 21 FPS with diffuse and specular lighting. Using PShaders, I was able to render specular and diffuse lighting at a solid 60 FPS. Keep in mind the first sketch is 2D and my new one is 3D, so I'm not sure the comparison is entirely fair.

No Demo? :(

Sadly, Processing no longer allows exporting to applets, so I can't even post a demo running in Java. The perfect solution would be to implement PShader in Processing.js, which is something I'm considering…

