
Game 2 for 1GAM: Tetrissing May 17, 2013

Posted by Andor Salga in 1GAM, Game Development, Open Source, Processing, Processing.js.

tetrissing

Click to play!
View the source

I’m officially releasing Tetrissing for the 1GAM challenge. Tetrissing is an open source Tetris clone I wrote in Processing.

I began working on the game during Ludum Dare 26. There were a few developers hacking on LD26 at the Ryerson Engineering building, so I decided to join them. I was only able to stay for a few hours, but I managed to get the core mechanics done in that time.

After I left Ryerson, I did some research and found that most of the Tetris clones online lacked basic features and had almost no polish. I wanted to contribute something different from what was already available, so that’s when I decided to make this one of my 1GAM games. I spent the next two weeks fixing bugs, adding features, audio, and art, and polishing the game.

I’m fairly happy with what I have so far. My clone doesn’t rely on annoying keyboard key repeats, yet it still allows tapping the left or right arrow keys to move a piece one block. I added a ‘ghost’ piece feature, a kickback feature, pausing, restarting, audio, and art. None of this was too difficult, but it did require work, so in retrospect I want to take on something a bit more challenging for my next 1GAM game.
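To make the input handling concrete, here’s a minimal sketch of that kind of scheme. It is not the actual Tetrissing code; movePiece() and the timing constants are placeholders. A tap moves the piece one block, and holding a key repeats the move on the sketch’s own timer rather than relying on the OS key repeat.

// A minimal sketch of key handling that ignores the OS key repeat.
// movePiece() and the timing constants are placeholders, not Tetrissing code.
// A single tap moves the piece one block; holding a key uses our own timer.

final int REPEAT_DELAY = 170;  // ms before a held key starts repeating
final int REPEAT_RATE  = 50;   // ms between repeats while the key is held

int heldDir = 0;        // -1 = left, 1 = right, 0 = nothing held
int nextMoveTime = 0;   // when a held key may move the piece again

void setup(){
  size(200, 200);
}

void draw(){
  // Our own repeat logic, independent of the keyboard's repeat settings.
  if(heldDir != 0 && millis() >= nextMoveTime){
    movePiece(heldDir);
    nextMoveTime = millis() + REPEAT_RATE;
  }
}

void keyPressed(){
  if(keyCode == LEFT || keyCode == RIGHT){
    int dir = (keyCode == LEFT) ? -1 : 1;
    // The OS fires keyPressed() repeatedly while a key is held;
    // only react to the first event.
    if(dir != heldDir){
      heldDir = dir;
      movePiece(dir);   // a tap moves the piece exactly one block
      nextMoveTime = millis() + REPEAT_DELAY;
    }
  }
}

void keyReleased(){
  if(keyCode == LEFT || keyCode == RIGHT){
    heldDir = 0;
  }
}

void movePiece(int dir){
  // Stand-in for the real game logic.
  println("move " + dir);
}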

Lessons Learned

One mistake I made when writing this was over-complicating the audio code. I used Minim for the Processing version, but I had to write my own implementation for the Processing.js version. I decided to look into the Web Audio API, and after fumbling around with it I did eventually manage to get it to work, but then the sound didn’t work in Firefox. Realizing that I had made a simple matter complex, I ended up scrapping the whole thing and resorting to audio tags, which took very little effort to get working. The SoundManager I have for JavaScript is now much shorter, easier to understand, and still gets the job done.

Another issue I ran into was a bug in the Processing.js library. When using tint() to color my ghost pieces, Pjs would refuse to render one of the blocks that composed a Tetris piece. I dove into the tint() code and tried fixing it myself, but I didn’t get too far. After taking a break, I realized I didn’t really have the time to invest in a Pjs fix, and I came up with a dead-simple work-around: since only the first block wasn’t rendering, I render that first ‘invisible’ block off screen, then render the same block on screen a second time. Fixing the issue in Pjs would have been nice, but that wasn’t my main goal.
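In Processing syntax, the work-around looks roughly like the sketch below; drawGhost(), blockImage, and the block coordinates are hypothetical stand-ins, not the actual Tetrissing source.

// Hypothetical ghost-piece rendering showing the work-around.
// Pjs dropped the first tinted image() call, so draw a sacrificial copy of
// that first block off screen, then draw every block where it belongs.
void drawGhost(PImage blockImage, int[][] blocks, int cellSize){
  tint(255, 90);  // translucent ghost

  // The block Pjs refuses to render goes off screen.
  image(blockImage, -cellSize, -cellSize);

  // Now draw all of the blocks, including the first, on screen.
  for(int i = 0; i < blocks.length; i++){
    image(blockImage, blocks[i][0] * cellSize, blocks[i][1] * cellSize);
  }

  noTint();
}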

Lastly, I was reminded how much time it takes to polish a game. I completed the core mechanics of Tetrissing in a few hours, but it took another 2 weeks to polish it!

If you like my work, please star or fork my repository on GitHub. Also, please post any feedback. Thanks!

Game 1 for 1GAM: Hello Hanoi! February 27, 2013

Posted by Andor Salga in 1GAM, Game Development, Objective-C.

hello_hanoi

My first iPhone game is available for download on the App Store. I call it Hello Hanoi!. It’s my “Hello World” of iPhone game development via the Towers of Hanoi. My motivation to code and release the game came from the 1GAM challenge, a year-long event that dares developers to release one game every month.

When planning the game, I wanted to keep the scope very tight. In the end, I wanted a polished, tightly scoped game rather than a feature-rich, unpolished one. I think I managed to achieve that, but for my next game, I’d like to try the opposite.

The Pixels

I had a difficult time getting a hold of artists, so I decided to learn pixel art and make all the assets myself. To begin making the art, I tried a few applications: Gimp, Aseprite, and Pixen. Gimp had issues with leaving artifacts on the canvas, Aseprite had problems with cursor position and a UI I found awkward, and Pixen kept crashing. It was a bit frustrating, so I re-installed them all and started over. I launched Pixen first, and it seemed to work, so I stuck with it.

Making all the art myself shifted the release date dramatically. I should have released at the end of January, and it’s almost March. At the same time, I had a lot of fun learning pixel art and learning about art in general: attention to color, lighting, shadows, and mood.

One particular level was very tedious to create, and I soon realized I could generate the art instead! So, I launched Processing and wrote a small sketch to create a series of city buildings. It doesn’t look as good as what I could have done by hand, but it was much faster to create this way.
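The sketch was along the lines of the one below. This is a rough reconstruction rather than the original code, and the sizes and colors are arbitrary: it fills the canvas with randomly sized buildings and randomly lit windows.

// A rough reconstruction of the idea: generate a row of simple buildings
// with randomly lit windows. The sizes and colors are arbitrary.
void setup(){
  size(480, 160);
  noStroke();
  noLoop();
}

void draw(){
  background(25, 30, 60);  // night sky

  int x = 0;
  while(x < width){
    int w = (int)random(30, 60);
    int h = (int)random(60, 130);

    // Building body
    fill(40 + random(30));
    rect(x, height - h, w, h);

    // Windows on a fixed grid, lit at random
    fill(255, 220, 120);
    for(int wx = x + 4; wx < x + w - 6; wx += 10){
      for(int wy = height - h + 6; wy < height - 10; wy += 12){
        if(random(1) < 0.5){
          rect(wx, wy, 4, 6);
        }
      }
    }
    x += w + 2;
  }
}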

The Code

The code was straightforward, but I did have to learn a few iOS specifics. How do I write a plist to storage? How do I use the new storyboards? Like the art, it was at times frustrating, but in the end it was worth it.

One mistake I did make was over-generalizing. I had the idea that it would be neat to support n number of discs. I wrote all the code to handle the rendering and positioning of the discs, but then realized it didn’t fit into the game. Players would get bored before they reached that many discs, and the necessary horizontal scaling of the disc assets would break the art style. So, I ended up taking the code out. Next time, I’ll try to catch myself over-generalizing.

I had a fun time making my first 1GAM game and I look forward to trying something new for the next one!

Automating my WorkFlow with TexturePacker’s Command Line Options February 17, 2013

Posted by Andor Salga in BNOTIONS, Game Development, Shell Scripting, TexturePacker.

At work I use TexturePacker quite a bit to generate sprite sheets for UIToolkit from within Unity. Each sprite sheet ends up corresponding to a layer in UIToolkit, so the sheets we use are typically named “layer0.png”, “layer02x.png”, “layer0.txt”, and “layer02x.txt”, where the .txt files are the metadata files.

During development, I’ll be interrupted every now and then with new and improved assets that need to be pushed up into the Unity application. Once I have received an asset, I must take a series of steps to actually use it in Unity.

I open the TexturePacker .TPS file and drag the asset in. The sprites get re-arranged, and then I perform the following:

  • Set the sheet filename for use with retina mobile devices, adding “2x”
  • Set the width of the sheet to 2048
  • Set the height of the sheet to 2048
  • Set scaling to 1.0
  • Click publish

I would then need to do the same thing for non-retina devices. UIToolkit will automatically select the appropriately sized sprite sheet based on the “2x” extension.

  • Set the data filename for use with non-retina mobile devices, removing “2x”
  • Set the width of the sheet to 1024
  • Set the height of the sheet to 1024
  • Set scaling to 0.5
  • Click publish

Once TexturePacker creates the 4 files (layer0.png, layer0.json, layer02x.png, layer02x.json ), I would rename the .json files to .txt to keep Unity happy. This process would be done over and over again, until insanity.

Automating Things

This weekend I had some time to investigate a way to automate this tedious process. I wanted a solution such that, after editing the .TPS file, I could simply run a script that would take care of the rest of the work. I began by looking into TexturePacker’s command line options. After some tinkering, I came up with a short script that reduces the number of clicks and edits I need to make.

I placed the script within the directory that contains all our assets. However, the output sheets and data files go into a folder of their own, so I need to reference those paths relative to where the shell script lives. So, this would be a script for one layer of UIToolkit:

#!/bin/bash
TexturePacker --sheet ../Resources/layer0.png    --data ../Resources/layer0.txt    --scale 0.5 --width 1024 --height 1024 --format unity layer0.tps
TexturePacker --sheet ../Resources/layer02x.png  --data ../Resources/layer02x.txt  --scale 1.0 --width 2048 --height 2048 --format unity layer0.tps

Note that I can omit any options that are already set in the .TPS file, such as padding, rotation, algorithm, etc. This helps keep the script short and sweet.

The options all correspond to the changes I mentioned previously. One interesting thing to note is the --format unity option, which removes the need to rename the .json data file to .txt. You might ask why I didn’t just set this option in the TexturePacker GUI. The reason is, I only learned about this option after looking over the command line help!

I now had a script that I could run from the command line, but I wanted it to be easier to use. If I had just edited the .TPS file, I would be in Finder, and being able to double-click the script would be nicer than opening a terminal to execute it. But running the script from Finder would simply return an error, since the working directory would be my home directory.

To fix this issue, I had to modify the script a bit more:

#!/bin/bash
DIR="$( cd "$( dirname "$0" )" && pwd )"
TexturePacker --sheet "$DIR"/../Resources/layer0.png    --data "$DIR"/../Resources/layer0.txt    --scale 0.5 --width 1024 --height 1024 --format unity  "$DIR"/layer0.tps
TexturePacker --sheet "$DIR"/../Resources/layer02x.png  --data "$DIR"/../Resources/layer02x.txt  --scale 1.0 --width 2048 --height 2048 --format unity  "$DIR"/layer0.tps

When the shell script runs, we get the directory where the script lives, cd into that directory, call pwd, and assign that value to our DIR variable. This part took a bit more time, as I learned that spaces on either side of the equals sign will confuse bash, so I had to leave those out.

Now, if a new asset is sent to me, I open the .TPS file, add the file and save. Then I can run the script with a simple double-click. Tada!

Next steps

Using this method, I need to create a shell script for every UIToolkit layer. This isn’t nearly as bad as it sounds, since we typically only have 2-3 layers. But what I’d like to do in the future is investigate AppleScript, which can convert a shell script into an app that allows files to be dropped onto it. If I did this, I could drop the .TPS file onto the app, and the script could extract the filename and do the rest. This would remove the need for a script per layer.

Sprite Sheet Guide Generator January 13, 2013

Posted by Andor Salga in Game Development, Pixel Art, Processing.js.

sprite sheet guide generator
Click the image above to get the tool.

I began using Pixen to start making pixel art assets for a few games I’m developing for the 1 game a month challenge.

While pixelating away, I found myself creating a series of sprite sheets for bitmapped fonts. I created one here, then another, but by then I found myself running into the same problem: before I began drawing each glyph, I first had to make sure I had a nice grid to keep all the characters in line. Each font used a different number of pixels, so I had to start from scratch every time. You can imagine that counting rows and columns of pixels and drawing each line separating glyphs is extremely tedious. I needed something to eliminate this from my workflow.

I decided to create a decent tool that took away this painful process. What I needed was a sprite sheet guide generator, a tool that created an image of a grid based on these inputs:

  • Number of sprites per row
  • Number of sprites per column
  • Width of the sprite
  • Height of the sprite
  • Border width and color

I used Processing.js to create the tool, and I found the results quite useful. When the tool was almost finished, I realized I could alternate the sprite background colours to help me even more when I’m drawing down at the pixel level, so I implemented that as well.
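Stripped of its UI, the core of such a generator is only a few lines of Processing. The sketch below is a minimal version of the idea; the input values are placeholders, not the tool’s defaults.

// The core of the guide generator, minus the UI. The values below are
// placeholders, not the tool's defaults.
int spritesPerRow = 8;
int spritesPerCol = 4;
int spriteWidth   = 16;
int spriteHeight  = 24;
int borderWidth   = 1;
color borderColor = color(255, 0, 255);

void setup(){
  // 8*(16+1)+1 = 137 wide, 4*(24+1)+1 = 101 tall
  size(137, 101);
  noStroke();
  noLoop();
}

void draw(){
  // The border color shows through the gaps between cells.
  background(borderColor);

  for(int row = 0; row < spritesPerCol; row++){
    for(int col = 0; col < spritesPerRow; col++){
      // Alternate the cell backgrounds so it's easy to see where one
      // glyph ends and the next begins.
      fill((row + col) % 2 == 0 ? 210 : 235);
      rect(borderWidth + col * (spriteWidth + borderWidth),
           borderWidth + row * (spriteHeight + borderWidth),
           spriteWidth, spriteHeight);
    }
  }
  // save("guide.png");  // export the guide for use as a layer in Pixen
}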

You can run the tool right here or you can click on the image at the start of this post.

New Year Habits December 31, 2012

Posted by Andor Salga in Habits, Personal Development.

Last year I decided to begin exercising. Not for a few months or a couple years, but for the rest of my life. I made a resolution to do some form of physical or mental exercise for 15 minutes a day every day. 15 minutes may sound meager, but by making a small habit change, I could later on extend my time without too much difficulty. I managed to keep my commitment to bike, run, meditate, or do yoga every day.

I had some slips when I missed a few days while moving apartments and traveling, but this small habit change got me to start running, and I eventually ran my first half marathon (: So, it worked out well.

If you’re setting some New Year’s resolutions, here are some tips on forming your new habits:

Make it quantifiable

You must be able to measure your goal so it can serve as an indication that you are on the right track. Either you did it or you didn’t; there should be no ambiguity. Setting a number for your goal is the easiest way to prevent this ambiguity. My metric was 15 minutes: either I spent those 15 minutes on myself or I didn’t. So figure out: How long? How many pages? How many phone calls? What time? Put a number on it so that when you are done, you can tick it off.

Keep a log

‘Ticking it off’ is an important step in habit forming because it helps motivate you and, as I mentioned before, it shows your progress. Jerry Seinfeld has a productivity secret: Don’t Break the Chain. This is a great tool that I use to help me work on my goals. Upon completing your task, tick it off in your log to track your progress.

When you fail…Start over

Don’t be hard on yourself if/when you fail, simply start over. Last year I did some traveling, which threw my routine out of whack. I ended up missing some days of exercise. I could have beat myself up over this, but that would have been counter-productive. If you go back to your old habits, simply start over.

Do what works for you

Don’t set yourself up for failure by doing something you hate. Forming new habits can be difficult, so there’s no need to make it even more challenging. You know yourself best, so choose a task that is at least somewhat enjoyable. Personally, I love the stationary bike: it’s hands-free, I can listen to music, and I can enjoy the scenery while I pedal away. So, if you want to succeed with your new habits, make sure to do what works for you.

Good luck! (:

BitCam.Me December 25, 2012

Posted by Andor Salga in Open Source, Pixel Art, Processing.js.

bitcam_me_asalga

Check this out: I created a WebRTC demo that pixelates your webcam video stream: BitCam.me.

I recently developed a healthy obsession with pixel art and began making some doodles in my spare time. Soon after I started doing this, I wondered what it would be like to generate pixel art programmatically. So I fired up Processing and made a sketch that did just that: it pixelated a PNG by averaging the pixel colors within each block of neighboring pixels.
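The heart of that first sketch looked something like the following reconstruction, where “photo.png” and the block size are placeholders:

// The gist of the original sketch: average each blockSize x blockSize
// neighbourhood of the source image and draw it as one large "pixel".
// "photo.png" and blockSize are placeholders.
PImage src;
int blockSize = 8;

void setup(){
  size(256, 256);
  src = loadImage("photo.png");  // assumed to be 256x256
  src.loadPixels();
  noStroke();
  noLoop();
}

void draw(){
  for(int y = 0; y < src.height; y += blockSize){
    for(int x = 0; x < src.width; x += blockSize){
      float r = 0, g = 0, b = 0;
      int count = 0;

      // Sum the color channels over this block.
      for(int by = y; by < y + blockSize && by < src.height; by++){
        for(int bx = x; bx < x + blockSize && bx < src.width; bx++){
          color c = src.pixels[by * src.width + bx];
          r += red(c);
          g += green(c);
          b += blue(c);
          count++;
        }
      }

      // Draw one big pixel using the average color of the block.
      fill(r / count, g / count, b / count);
      rect(x, y, blockSize, blockSize);
    }
  }
}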

After completing that sketch, I realized I could easily upgrade what I had written to use WebRTC instead of a static image. I thought it would be much more fun and engaging to use this demo if it was in real-time. I added the necessary JavaScript and I was pretty excited about it (:

I then found SuperPixelTime and saw that it did something similar to what I had written. Unlike my demo, though, it had some nice options for changing the color palette. I read the code, figured making those changes wouldn’t be difficult either, and soon had my own controls for changing palettes.
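The basic idea behind a palette option is to snap each averaged color to the nearest entry in a small palette. Here’s a rough sketch of that in Processing (the real demo does this in JavaScript, and the palette values below are arbitrary, just for illustration):

// Snap a color to the closest entry in a palette using RGB distance.
// The palette below is arbitrary, just for illustration.
color[] palette = {
  color(20, 12, 28),   color(68, 36, 52),   color(48, 52, 109),
  color(78, 74, 78),   color(133, 76, 48),  color(208, 70, 72),
  color(109, 170, 44), color(210, 125, 44), color(222, 238, 214)
};

color nearestPaletteColor(color c){
  color best = palette[0];
  float bestDist = Float.MAX_VALUE;

  for(int i = 0; i < palette.length; i++){
    float d = dist(red(c), green(c), blue(c),
                   red(palette[i]), green(palette[i]), blue(palette[i]));
    if(d < bestDist){
      bestDist = d;
      best = palette[i];
    }
  }
  return best;
}

void setup(){
  // Example: a mid gray snaps to the closest palette entry.
  println(hex(nearestPaletteColor(color(128, 128, 128))));
}

In the pixelation sketch above, the fill() call would then become fill(nearestPaletteColor(color(r / count, g / count, b / count))).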

I had a great time making the demo. Let me know what you think!

Enjoy!

Engage3D Hackathon Coming Soon! December 8, 2012

Posted by Andor Salga in Kinect, Open Source, point cloud, webgl.

A month ago, over Google chat, Bill Brock and I pitched our idea to develop an open source 3D web-based videoconferencing system for the Mozilla Ignite Challenge. Will Barkis from Mozilla recorded and moderated the conversation and then sent it off to a panel of judges. The pitch was for a slice of the $85,000 being doled out to the winners of the Challenge.

After some anticipation, we got word that we were among the winners. We would receive $10,000 in funding to support the development of our prototype. Our funding will cover travel expenses, accommodations, the purchasing of additional hardware and the development of the application itself.

We will also take on two more developers and have a hackathon closer to the end of the month. Over the span of four days we will iterate on our original code and release something more substantial. The Company Lab in Chattanooga has agreed to provide us with a venue to hack and a place to plug into the network. Both Bill and I are extremely excited to get back to hacking on Engage3D and to get back to playing with the gig network.

We will keep you updated on our Engage3D progress, stay tuned!

Developing engage3D – Phase 1 October 25, 2012

Posted by Andor Salga in Kinect, Open Source, point cloud, webgl.


A point cloud of Bill Brock rendered with WebGL

I am working with Bill Brock (a PhD student from Tennessee) to develop an open source 3D video conferencing system that we are calling engage3D. We are developing this application as part of Mozilla’s Ignite Challenge.

Check out the video!

During the past few days, Bill and I made some major breakthroughs in terms of functionality. Bill sent me Kinect depth and color data via a server he wrote. We then managed to render that data in the browser (on my side) using WebGL. We are pretty excited about this since we have been hacking away for quite some time!

There have been significant drawbacks to developing this application over commodity internet; I managed to download over 8 GB of data in only one day while experimenting with the code. Hopefully, the application can soon be ported to the GENI resources in Chattanooga, TN for further prototyping and testing.

Even though we are still limited to a conventional internet connection, we want to do some research into data compression. We have also been struggling with calibrating the Kinect, which is something we hope to resolve soon.

Experimenting with Normal Mapping September 26, 2012

Posted by Andor Salga in Game Development, Open Source, Processing, Processing.js.


Click me!

** Update March 20 2014 **
The server where this sketch was being hosted went down. I recently made several performance improvements to the sketch and re-posted it on OpenProcessing.

A quick note about the demo above: I’m aware the performance on Firefox is abysmal, and I know it’s wonky on Chrome. Fixes to come!

I’ve heard of and seen the use of normal mapping many times, but I had never experimented with it myself, so I decided I should, just to learn something new. Normal mapping is a type of bump mapping: a way of simulating bumps on an object, usually in a 3D game, where the bumps are simulated with the help of lights. To get a better sense of the technique, click on the image above to see a running demo. The example uses a 2D canvas and simulates Phong lighting.

So why use it and how does it work?

The great thing about normal mapping is that you can simulate the vertex detail of a simplified object without providing the extra vertices. By providing only the normals and then lighting the object, it will seem like the object has more detail than it actually does. If we wanted to place this code in a 3D game, we would only need 4 vertices to define a quad (maybe a wall), and along with the normal map, we could render some awesome Phong illumination.

So, how does it work? Think of what a bitmap is: just a 2D map of bits, where each pixel contains the color components that make up the entire graphic. A normal map is also a 2D map; what makes normal maps special is how their data is interpreted. Instead of holding a ‘color’ value, each pixel actually stores a vector that defines where the corresponding part of the color image is ‘facing’, also known as our normal vector.

These normals need to be encoded into an image somehow. This can be done easily, since we have three floating point components (x, y, z) that need to be converted into three 8 or 16 bit color components (r, g, b). When I began playing with this stuff, I wanted to see what the data actually looked like, so I first dumped out all the color values from the normal map and found the range of the data:

Red (X) ranges from 0 to 255
Green (Y) ranges from 0 to 255
Blue (Z) ranges from 127 to 255

Why is Z different? When I first looked at this, it seemed to me that each component simply needed 127 subtracted from it so the values map onto their corresponding negative number lines in a 3D coordinate system. However, Z will always point directly towards the viewer, never away. If you do a search for normal map images, you will see the images are blue in color, so it makes sense that blue is pronounced: the normal is always pointing ‘out’ of the image. If Z ranged from 0 to 255, subtracting 127 would sometimes produce a negative number, which doesn’t make sense. So, after subtracting 127 from each component:

X -127 to 128
Y -127 to 128
Z 0 to 128

The way I picture this is to imagine all the normals contained in a translucent semi-sphere whose base lies on the XY-plane. But since the Z range is half that of X and Y, it looks more like a squashed semi-sphere. This tells us the vectors aren’t normalized, but that is easily solved with normalize(). Once normalized, they can be used in our lighting calculations. So, now that we have some theoretical idea of how this rendering technique works, let’s step through some code. I wrote a Processing sketch, but of course the technique can be used in other environments.

// Declare globals to avoid garbage collection

// colorImage is the original image the user wants to light
// targetImage will hold the result of blending the 
// colorImage with the lighting.
PImage colorImage, targetImage;

// normalMap holds our 2D array of normal vectors. 
// It will be the same dimensions as our colorImage 
// since the lighting is per-pixel.
PVector normalMap[][];

// shine will be used in specular reflection calculations
// The higher the shine value, the shinier our object will be
float shine = 40.0f;
float specCol[] = {255, 128, 50};

// rayOfLight will represent a vector from the current 
// pixel to the light source (cursor coords);
PVector rayOfLight = new PVector(0, 0, 0);
PVector view = new PVector(0, 0, 1);
PVector specRay = new PVector(0, 0, 0);
PVector reflection = new PVector(0, 0, 0);

// These will hold our calculated lighting values
// diffuse will be white, so we only need 1 value
// Specular is orange, so we need all three components
float finalDiffuse = 0;
float finalSpec[] = {0, 0, 0};

// nDotL = Normal dot Light. This is calculated once
// per pixel in the diffuse part of the algorithm, but we may
// want to reuse it if the user wants specular reflection
// Define it here to avoid calculating it twice per pixel
float nDotL;

void setup(){
  size(256, 256);

  // Create targetImage only once
  colorImage = loadImage("data/colorMap.jpg");
  targetImage = createImage(width, height, RGB);

  // Load the normals from the normalMap into a 2D array to 
  // avoid slow color lookups and clarify code
  PImage normalImage =  loadImage("data/normalMap.jpg");
  normalMap = new PVector[width][height];
  
  // i indexes into the 1D array of pixels in the normal map
  int i;
  
  for(int x = 0; x < width; x++){
    for(int y = 0; y < height; y++){
      i = y * width + x;

      // Convert the RGB values to XYZ
      float r = red(normalImage.pixels[i]) - 127.0;
      float g = green(normalImage.pixels[i]) - 127.0;
      float b = blue(normalImage.pixels[i]) - 127.0;
      
      normalMap[x][y] = new PVector(r, g, b);
      
      // Normal needs to be normalized because Z
      // ranged from 127-255
      normalMap[x][y].normalize();
    }
  }
}

void draw(){
  // When the user is no longer holding down the mouse button, 
  // the specular highlights aren't used. So reset the values
  // every frame here and set them only if necessary
  finalSpec[0] = 0;
  finalSpec[1] = 0;
  finalSpec[2] = 0;
  
  // Per frame we iterate over every pixel. We are performing
  // per-pixel lighting.
  for(int x = 0; x < width; x++){
    for(int y = 0; y < height; y++){
      
      // Simulate a point light which means we need to
      // calculate a ray of light for each pixel. This vector
      // will go from the light/cursor to the current pixel.
      // Don't use PVector.sub() because that's too slow.
      rayOfLight.x = x - mouseX;
      rayOfLight.y = y - mouseY;

      // We only have two dimensions with the mouse, so we
      // have to create the third dimension ourselves.
      // Force the ray to point into 3D space down -Z.
      rayOfLight.z = -150;
      
      // Normalize the ray so it can be used in a dot product
      // operation to get sensible values (-1 to 1).
      // The normal will point towards the viewer
      // The ray will be pointing into the image
      rayOfLight.normalize();
      
      // We now have a normalized vector from the light
      // source to the pixel. We need to figure out the
      // angle between this ray of light and the normal
      // to calculate how much the pixel should be lit.

      // Say the normal is [0,1,0] and the light is [0,-1,0]
      // The normal is pointing up and the ray, directly down.
      // In this case, the pixel should be fully 100% lit
      // The angle would be PI

      // If instead the ray was [0,1,0], pointing in the same
      // direction as the normal, it would not contribute
      // light at all, 0% lit. The angle would be 0 radians

      // We can easily calculate the angle by using the
      // dot product and rearranging the formula.
      // Omitting  magnitudes since they are = 1
      // ray . normal = cos(angle)
      // angle = acos(ray . normal)

      // Taking the acos of the dot product returns
      // a value between 0 and PI, so we normalize
      // that and scale to 255 for the color amount     
      nDotL = rayOfLight.dot(normalMap[x][y]);
      finalDiffuse = acos(nDotL)/PI * 255.0;
      
      // Avoid more processing by only calculating
      // specular lighting if the user wants it.
      // It is fairly processor intensive.
      if(mousePressed){
        // The next 5 lines calculate the reflection vector
        // using Phong specular illumination. I've written
        // a detailed blog about how this works: 
        // https://asalga.wordpress.com/2012/09/23/understanding-vector-reflection-visually/ 
        // Also, when we have to perform vector subtraction
        // as part of calculating the reflection vector,
        // do it manually since calling sub() is slow.
        reflection = new PVector(normalMap[x][y].x,
                                 normalMap[x][y].y,
                                 normalMap[x][y].z);
        reflection.mult(2.0 * nDotL);
        reflection.x -= rayOfLight.x;
        reflection.y -= rayOfLight.y;
        reflection.z -= rayOfLight.z;
        
        // The view vector is (0, 0, 1), that is, it points
        // directly towards the viewer. The dot product
        // of two normalized vectors returns a value from
        // -1 to 1. However, none of the normal vectors
        // point away from the viewer, so we don't have to
        // deal with the result of the dot product being
        // negative and thus a negative specular intensity.
        
        // Raise the result of that dot product value to the
        // power of shine. The higher shine is, the shinier
        // the surface will appear.        
        float specIntensity = pow(reflection.dot(view),shine);
        
        finalSpec[0] = specIntensity * specCol[0];
        finalSpec[1] = specIntensity * specCol[1];
        finalSpec[2] = specIntensity * specCol[2];
      }
      
      // Now that the specular and diffuse lighting are
      // calculated, they need to be blended together
      // with the original image and placed in the
      // target image. Since blend() is too slow, 
      // perform our own blending operation for diffuse.
      targetImage.set(x,y, 
        color(finalSpec[0] + (finalDiffuse *   
                            red(colorImage.get(x,y)))/255.0,

              finalSpec[1] + (finalDiffuse * 
                          green(colorImage.get(x,y)))/255.0,

              finalSpec[2] + (finalDiffuse *  
                         blue(colorImage.get(x,y)))/255.0));
    }
  }
  
  // Draw the final image to the canvas.
  image(targetImage, 0,0);
}

Whew! Hope that was a fun read. Let me know what you think!

Understanding Vector Reflection Visually September 23, 2012

Posted by Andor Salga in Game Development, Math.

I started experimenting with normal maps when I came to the subject of specular reflection. I quickly realized I didn’t understand how the vector reflection part of the algorithm worked, which prompted me to investigate exactly how all this magic was happening. Research online didn’t prove very helpful: forums are littered with people throwing around the vector reflection formula with no explanation whatsoever. This forced me to step through the formula piecemeal until I could make sense of it. This blog post attempts to guide readers through how one vector can be reflected about another using logic and geometry.

Vector reflection is used in many graphics and gaming applications. As I mentioned, it is an important part of normal mapping, since it is used to calculate specular highlights. There are applications other than lighting, but let’s use the lighting problem for illustration.

Let’s start with two vectors. We would know the normal of the plane we are reflecting off of along with a vector pointing to the light source. Both of these vectors are normalized.


L is a normalized vector pointing to our light source. N is our normal vector.

We are going to work in 2D, but the principle works in 3D as well. What we are trying to do is figure out: if a light in the direction L hits a surface with normal N, what would be the reflected vector? Keep in mind, all the vectors here are normalized.

I drew R here geometrically, but don’t assume we can simply negate the x component of L to get R. If the normal vector was not pointing directly up, it would not be that easy. Also, this diagram assumes we have a mirror-like reflecting surface. That is, the angle between N and L is equal to the angle between N and R. I drew R as a unit vector since we only really care about its direction.

So, right now, we have no idea how to get R.
R = ?

However, we can use some logic to figure it out. There are two vectors we can play with. If you look at N, you can see that if we scale it, we can create a meaningful vector, call it sN. Then we can add another vector from sN to R.

What we are doing here is exploiting the parallelogram law of vectors. Our parallelogram runs from the origin to L to sN to R back to the origin. What is interesting is that this new vector from sN to R has the opposite direction of L. It is -L!

Now our formula says that if we scale N by some amount s and subtract L, we get R.
R = sN - L

Okay, now we need to figure out how much to scale N. To do this, we need to introduce yet another vector from the origin to the center of the parallelogram.

Let’s call this C. If we multiply C by 2, we can get sN (since C is half of the diagonal of the parallelogram).
2C = sN

Replacing sN in our formula with 2C:
R = 2C - L

Figuring out C isn’t difficult, since we can use vector projection. If you are unsure about how vector projection works, watch this KhanAcademy video.

If we replace C with our projection work, the formula starts to look like something! It says we need to project L onto N (to get C), scale it by two then subtract L to get R!
R = 2((N · L) / |N|²)N - L

However, this can be simplified since N is normalized and getting the magnitude of a normalized vector yields 1.
R = 2(N · L)N - L

Woohoo! We just figured out vector reflection geometrically, fun!
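If you want to sanity check the formula, here’s a tiny Processing snippet that plugs in a couple of test vectors. The test values are my own, not anything from the diagrams above.

// Quick numeric check of R = 2(N . L)N - L using Processing's PVector.
// Both vectors are assumed to be normalized.
PVector reflect(PVector L, PVector N){
  PVector R = PVector.mult(N, 2 * N.dot(L));
  R.sub(L);
  return R;
}

void setup(){
  // Mirror case: N points straight up, L comes in at 45 degrees.
  PVector N = new PVector(0, 1);
  PVector L = new PVector(1, 1);
  L.normalize();

  // Expect roughly (-0.707, 0.707): the same angle on the other side of N.
  println(reflect(L, N));
}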

I hope this has been of some help to anyone trying to figure out where the vector reflection formula comes from. It has been frustrating piecing it all together and challenging trying to explain it. Let me know if this post gave you any ‘Ah-ha’ moments (:
