
Gomba 0.1 September 20, 2014

Posted by Andor Salga in Game Development, Open Source, Processing, Processing.js.

Play demo

I was reading Daniel Shiffman’s Processing book The Nature of Code and came to a section dealing with physics. I hadn’t written many sketches that use physics calculations, so I figured it would be fun to implement a simple runner/platformer game in Processing that uses forces, acceleration, velocity, etc.

I decided to use a component-based architecture, and I found it surprisingly fun to create components and tack them onto game objects. So far I only have a preliminary amount of functionality done, and I still need to sort out most of the collision code, but progress is good.
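For anyone curious what the component approach looks like in code: the rough idea is a GameObject that owns a list of components and forwards its update/draw calls to them. The Processing sketch below is only an illustration of that idea; the class and method names are mine, not necessarily what Gomba uses.

// A minimal component-based setup (illustrative names, not Gomba's actual code).
abstract class Component {
  GameObject owner;
  void update(float dt){}
  void draw(){}
}

class GameObject {
  PVector position = new PVector();
  ArrayList<Component> components = new ArrayList<Component>();

  void addComponent(Component c){
    c.owner = this;
    components.add(c);
  }

  void update(float dt){
    for(Component c : components) c.update(dt);
  }

  void draw(){
    for(Component c : components) c.draw();
  }
}

// Example component: accelerates its owner downward every frame.
class GravityComponent extends Component {
  PVector velocity = new PVector();

  void update(float dt){
    velocity.y += 30 * dt;
    owner.position.x += velocity.x * dt;
    owner.position.y += velocity.y * dt;
  }
}

GameObject ball = new GameObject();

void setup(){
  size(200, 200);
  ball.position.set(100, 20);
  ball.addComponent(new GravityComponent());
}

void draw(){
  background(255);
  ball.update(1.0 / 60.0);
  ellipse(ball.position.x, ball.position.y, 10, 10);
}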

This marks my 0.1 release. I still have quite a way to go, but it’s a start. You can take a look at the code on GitHub or play around with the demo.

I got a bunch of inspiration from Pomax. He’s already created a Processing.js game engine, which you can check out here.

BTW “gomba” in Hungarian is mushroom :)

Understanding Raycasting Step-by-Step January 22, 2014

Posted by Andor Salga in C++, Game Development, gfx, Open Source, OpenFrameworks.

raycasting

Introduction

For some time I’ve wanted to understand how raycasting works. Raycasting is a graphics technique for rendering 3D scenes and it was used in old video games such as Wolfenstein 3D and Doom. I recently had some time to investigate and learn how this was done. I read up on a few resources, digested the information, and here is my understanding of it, paraphrased.

The ideas here are not my own. The majority of the code is from the references listed at the bottom of this post. I wrote this primarily to test my own understanding of the techniques involved.

Raycasting vs. Raytracing

Raycasting is not the same as raytracing. Raycasting casts width rays into the scene (one per column of pixels); it is essentially a 2D problem and it is non-recursive. Raytracing, on the other hand, casts width*height rays into a 3D scene, and those rays can bounce off several objects before a final color is calculated. It is much more involved and isn’t typically used for real-time applications.

Framework

Having spent a few years playing with Processing and Processing.js, I felt it was time for me to move on to OpenFrameworks. I’ve always favored C++ over Java, so this was an excuse to make my first OpenFrameworks application. My first one! (:

Background Theory

The idea is that we draw a very simple 3D world by casting viewportWidth rays into a scene, one per column of pixels. These rays are based on the player’s position, direction, and field of view. Our scene is primitive: a 2D array populated with integers, where non-zero values represent different colored cells/blocks and 0 represents empty space.

We iterate from the left side of the viewport to the right, creating a ray for every column of pixels. Once a ray hits the edge of a non-empty cell, we calculate its length. Based on how far away that edge is, we draw a shorter or longer single-pixel-wide vertical line centered in the viewport. This is how we achieve foreshortening.

Since our implementation does not involve any 3D geometry, this technique can be implemented on anything that supports a 2D context. This includes HTML5 canvas and even TI-83 devices.

What really excites me about this method is that the complexity is not dependent on the number of objects in the level, but instead on the number of horizontal pixels in the viewport! Very cool!!

Initial Setup

Inside an empty OpenFrameworks template you will have an ofApp.h header file. In this file we declare a few variables: pos, right, dir, FOV, and rot define properties of our player. The width and height variables are aliases for the viewport dimensions, and worldMap defines our world.

#pragma once

#include "ofMain.h"

class ofApp : public ofBaseApp{
  private:
  ofVec2f pos;
  ofVec2f right;
  ofVec2f dir;
  float FOV;
  float rot;

  int width;
  int height;
    
  int worldMap[15][24] = {
    {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,3,3,3,0,0,0,0,0,3,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,3,0,0,3,0,0,0,0,0,0,3,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,2,2,0,0,0,2,2,0,0,0,0,0,0,1},
    {1,0,0,0,3,0,3,0,0,0,2,2,0,0,0,2,2,0,0,0,0,0,0,1},
    {1,0,0,0,3,3,3,0,0,0,2,2,0,0,0,2,2,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,3,0,3,0,3,0,3,0,3,0,3,0,3,0,3,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}
  };
    
  public:
  void setup();
  void update();
  void draw();
		
  void keyPressed(int key);
  void keyReleased(int key);
  
  ~ofApp();
};

In the setup method of our ofApp.cpp implementation file we assign initial starting values to our player. The position is where the player will be relative to the top left of the array/world. We don’t assign the direction and right vectors here since those are set in update() every frame.

void ofApp::setup(){
  pos.set(5, 5);
  FOV = 45;
  rot = PI;
  
  width = ofGetWidth();
  height = ofGetHeight();
}

Setting the Direction and Right Vectors

We need two crucial vectors, the player’s direction and right vector.

In update(), we calculate these vectors based on the scalar rot value. (The keyboard will later drive the rot value, thus rotating the view).

Once we have the direction, we can calculate the perpendicular vector using a perp operation. There is a method which performs this task for you, but I wanted to demonstrate how easy it is here. It just involves swapping the components and negating one.

If our rot value is initially PI, we end up looking towards the left side of the world. Since our world is defined by an array, y increases downward in our world coordinate system.

void ofApp::update(){
  dir.x = cos(rot);
  dir.y = sin(rot);
  
  right.x = -dir.y;
  right.y = dir.x;
}

Drawing a Background

We can now start getting into our render method.

To clear each frame we are going to draw 2 rectangles directly on top of the viewport. The top half of the viewport will be the sky and the bottom will be grass.

void ofApp::draw(){
  // Blue rectangle for the sky
  ofSetColor(64, 128, 255);
  ofRect(0, 0, width, height/2);
  
  // Green rectangle for the ground
  ofSetColor(0, 255, 0);
  ofRect(0, height/2, width, height/2);

Calculating the Base Length of the Viewing Triangle

The FOV determines the angle over which the rays shoot from the player’s position. The exact center ray is the player’s direction. Perpendicular to this ray is a line which connects the far left and far right rays. This forms a little isosceles triangle, with the apex at the player’s position. The base of the triangle is the camera line, whose magnitude is what we are trying to calculate. We need it because it tells us how far left and right the rays have to span: the greater the FOV, the wider the rays will span.

Using some simple trigonometry, we can get this length. Note that we divide by two since we’ll be generating the rays in two parts, left side of the direction vector and the right side.

  float camPlaneMag = tan( FOV / 2.0f * (PI / 180.0) );
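For example, with the 45 degree FOV we set in setup():

  camPlaneMag = tan( (45 / 2) * PI / 180 )
              = tan( 22.5 degrees )
              ≈ 0.414

so the camera line extends roughly 0.41 units to either side of the direction vector.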

Generating the Rays

Before we start the main render loop, we declare rayDir which will be re-used for each loop iteration.

  ofVec2f rayDir;

Each iteration of this loop will be responsible for rendering a single pixel vertical line in the viewport. From the far left side of the screen to the far right.

  for(int x = 0; x < width; x++){

For each vertical slice, we need to generate a ray casting out into the scene. We do this by mapping the screen coordinates [0 to width-1] to [-1 to 1]. When x is 0, our ray cast for intersection tests will be collinear with our direction vector.

currCamScale drives the direction of the rays: it is the factor by which we scale the right vector.

    float currCamScale = (2.0f * x / float(width)) - 1.0f;

Here is where we generate our rays.

Our right vector is scaled depending on the base of our triangle viewing area. Then we scale it again depending on which slice we are currently rendering. If x = 0, currCamScale maps to -1 which means we are generating the farthest left ray.

Add the resultant vector to the direction vector and this will form our current ray shooting into our scene.

    rayDir.set(dir.x + (right.x * camPlaneMag) * currCamScale, 
               dir.y + (right.y * camPlaneMag) * currCamScale);

Calculating the Magnitude Between Edges

Each ray travels inside a 2D world defined by an array populated with integers. Most of the values are 0 representing empty space. Anything else denotes a wall which is 1×1 unit in dimensions. The integers then help define the ‘cell’ edges. These edges will be used to help us perform intersection tests with the rays.

What we are trying to figure out is: for each ray, when does it hit an edge? To answer that question we’ll need to figure out two things:

1. What is the magnitude from the player’s position, travelling along the current ray, to the nearest x edge (and y edge)? Note I say magnitude because we will be working with SCALAR values. I got confused here trying to figure this part out because I kept thinking about vectors.

2. What is the magnitude from one x edge to the next x edge? (And from one y edge to the next y edge.) We aren’t trying to figure out the vertical/horizontal distance; that’s just 1 unit. Instead, we need to calculate the hypotenuse of the triangle formed by the ray going from one edge to the other. When calculating the magnitude for x, the horizontal base of that triangle has length 1; when calculating the magnitude for y, the vertical height of the triangle is 1. Drawing this on paper helps.

Once we have these values we can start running the DDA algorithm: select the nearest x or y edge, test whether that edge has a filled block associated with it, and if not, increment some values and repeat. We’ll have two scalar values ‘racing’ one another.

Okay, let’s answer the second question first. Based on the direction, what is the magnitude/hypotenuse length from one x edge to another? We know horizontally, the length is 1 unit. That means we need to scale our ray such that the x component will be 1.

What value multiplied by x will result in 1? The answer is the inverse of x! So if we multiply the x component of the ray by 1/x, we’ll need to do the same for y to maintain the ray direction. This gives us a new vector:

v = [ rayDir.x * 1 / rayDir.x , rayDir.y * 1 / rayDir.x]

We already know the result of the calculation for the first component, so we can simplify:

v = [ 1, rayDir.y / rayDir.x ]

Then get the magnitude using the Pythagorean theorem.

    float magBetweenXEdges = ofVec2f(1, rayDir.y * (1.0 / rayDir.x) ).length();
    float magBetweenYEdges = ofVec2f(rayDir.x * (1.0 / rayDir.y), 1).length();
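As a quick sanity check, say a particular ray happened to be rayDir = (0.8, 0.6):

  magBetweenXEdges = |(1, 0.6/0.8)| = |(1, 0.75)| = 1.25
  magBetweenYEdges = |(0.8/0.6, 1)| ≈ |(1.33, 1)| ≈ 1.67

The ray leans more towards x, so the hop along the ray between x edges (1.25) is shorter than the hop between y edges (1.67).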

Calculating the Starting Magnitude

For this we have to calculate the length from the player’s position to the nearest x and y edges. To do this, we get the relative position from within the cell then use that as a scale for the magnitude between the edges.

Here we keep track of which direction the ray is going by setting dirStepX which will be used to jump from cell to cell in the correct direction.

For x, we have a triangle with a base horizontal length of 1 and some magnitude for its hypotenuse which we recently figured out. Now, based on the relative position of the user within a cell, how long is the hypotenuse of this triangle?

If the x position of the player was 0.2:

magToXedge = (0 + 1 - 0.2) * magBetweenXedges
           = 0.8 * magBetweenXedges

This means we are 80% away from the closest x edge, thus we need 80% of the hypotenuse. The same method is used to calculate the starting position of the y edge.

    // The starting cell is the one the player currently occupies: its
    // index is just the integer part of the player's position.
    int worldIndexX = int(pos.x);
    int worldIndexY = int(pos.y);

    // Scalar lengths to the nearest x/y edge, and which way to step
    float magToXedge, magToYedge;
    int dirStepX, dirStepY;

    if(rayDir.x > 0){
      magToXedge = (worldIndexX + 1.0 - pos.x) * magBetweenXEdges;
      dirStepX = 1;
    }
    else{
      magToXedge = (pos.x - worldIndexX) * magBetweenXEdges;
      dirStepX = -1;
    }

    if(rayDir.y > 0){
      magToYedge = (worldIndexY + 1.0 - pos.y) * magBetweenYEdges;
      dirStepY = 1;
    }
    else{
      magToYedge = (pos.y - worldIndexY) * magBetweenYEdges;
      dirStepY = -1;
    }

Running the Search

Here the x and y values ‘race’. If one length is less than the other, we increase the shorter one, step 1 unit in that direction and check again. We end the loop as soon as we find a non-empty cell.

Note, we keep track of not only the last cell we were working with, but also which edge (x or y) that we hit with sideHit.

    int sideHit;

    do{
      if(magToXedge < magToYedge){
        magToXedge += magBetweenXEdges;
        worldIndexX += dirStepX;
        sideHit = 0;
      }
      else{
        magToYedge += magBetweenYEdges;
        worldIndexY += dirStepY;
        sideHit = 1;
      }
    }while(worldMap[worldIndexX][worldIndexY] == 0);
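To make the ‘race’ concrete, suppose (with made-up starting values) magToXedge = 0.4, magToYedge = 0.9, magBetweenXEdges = 1.25 and magBetweenYEdges = 1.67. The first iteration crosses an x edge (0.4 < 0.9) and bumps magToXedge to 1.65; the second crosses the y edge (0.9 < 1.65) and bumps magToYedge to 2.57; the third crosses another x edge (1.65 < 2.57), and so on, stepping one cell each time until the cell we land in is non-empty.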

Selecting a Color & Lighting

We kept track of the non-empty index we hit, so we can use this to index into the map and get the associated color for the cell.

    ofColor wallColor;
    switch(worldMap[worldIndexX][worldIndexY]){
      case 1:wallColor = ofColor(255,255,255);break;
      case 2:wallColor = ofColor(0,255,255);break;
      case 3:wallColor = ofColor(255,0,255);break;
    }

If all faces of the walls were a solid color, it would be difficult to determine edges and see where the walls are in relation to each other. So, to provide a very rudimentary form of lighting, we darken the current color whenever a ray hits one kind of edge rather than the other (tracked by sideHit).

    wallColor = sideHit == 0 ? wallColor : wallColor/2.0f;
    ofSetColor(wallColor);

Preventing Distortion

The farther the wall is from the player, the shorter the vertical slice on the screen needs to be. Keep in mind that when viewing a wall dead on, the rays towards the sides of the view are longer than the center ray, which would result in shorter ‘scanlines’ at the edges. This isn’t desired since it warps (fisheyes) our representation of the world.

***Update***
After I posted this, I was e-mailed about how this part works. Here’s an image to help clarify things: (image: raycast distortion fix)

Essentially, we take the base of the ‘lower’ triangle and divide it by the base of the ‘upper’ triangle. Proportionally, this is the same as measuring the distance from the wall along the direction vector (perpendicular to the camera line) rather than the full length of the ray.

    float wallDist;

    if(sideHit == 0){
      wallDist = fabs((worldIndexX - pos.x + (1.0 - dirStepX) / 2.0) / rayDir.x);
    }
    else{
      wallDist = fabs((worldIndexY - pos.y + (1.0 - dirStepY) / 2.0) / rayDir.y);
    }

Calculating Line Height

If a ray hits a wall 1 unit away, we draw a line that fills the entire height of the screen.

To provide a proper FPS feel, we center the vertical lines in screen space along Y. The middle of the cells will be at eye height.

To calculate how long the vertical line needs to be on the screen, we divide the viewport height by the distance to the wall. So if the wall is 1 unit from the player, it will fill up the entire vertical slice.

At this point we need to center our line in the viewport column. Picture two vertical lines drawn downward, the shorter one centered beside the taller one. To get the position of the top of the shorter line relative to the taller one, divide both lines in half and subtract half of the shorter line’s length from half of the taller one’s. This gives us the starting point for the line we have to draw.

    float lineHeight = height/wallDist;
    float startY = height/2.0 - lineHeight/2.0;
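For example, assuming a 480 pixel tall viewport, a wall 2 units away gives lineHeight = 480/2 = 240 and startY = 240 - 120 = 120: a 240 pixel line starting a quarter of the way down the screen.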

Rendering a Slice

We finish our render loop by drawing a single-pixel-wide vertical line for this x scanline iteration, centered in the viewport. You could always clamp the lines to the bottom of the screen instead and make the player feel like a mouse in a maze (:

    ofLine(x, startY, x, startY + lineHeight);
  }// finish loop
}// finish draw()

Keyboard Control

I’m still learning OpenFrameworks, so my implementation of keyboard control works, but it’s a hack. I’m too embarrassed to post the code until I resolve the issue I’m dealing with. In the meantime, I leave the task of updating the player’s position and rotation up to the reader.

References

[1] http://www.permadi.com/tutorial/raycast/index.html
[2] http://lodev.org/cgtutor/raycasting.html
[3] http://en.wikipedia.org/wiki/Ray_casting

Game 1 for 1GAM 2014 – Asteroids January 14, 2014

Posted by Andor Salga in 1GAM, Game Development, Open Source, Processing, Processing.js.

Asteroids

Skip the blog and play Asteroids!

Back in November, I picked up a contract to develop Asteroids in Processing.js. After developing the game, I lost touch with my client, and with that, $150. Soon after, I went on vacation, and when I returned I decided to polish off what I had and submit it as a 1GAM entry. I added some audio, gave it a more authentic look and feel, added more effects and the like. So, this is my official release of my first 1GAM game for 2014!

Game 2 for 1GAM: Tetrissing May 17, 2013

Posted by Andor Salga in 1GAM, Game Development, Open Source, Processing, Processing.js.

tetrissing

Click to play!
View the source

I’m officially releasing Tetrissing for the 1GAM challenge. Tetrissing is an open source Tetris clone I wrote in Processing.

I began working on the game during Ludum Dare 26. There were a few developers hacking on LD26 at the Ryerson Engineering building, so I decided to join them. I was only able to stay for a few hours, but I managed to get the core mechanics done in that time.

After I left Ryerson, I did some research and found that most of the Tetris clones online lacked some basic features and had almost no polish. I wanted to contribute something different from what was already available, so that’s when I decided to make this one of my 1GAM games. I spent the next two weeks fixing bugs, adding features, audio and art, and polishing the game.

I’m fairly happy with what I have so far. My clone doesn’t rely on annoying keyboard key repeats, yet it still allows tapping the left or right arrow keys to move a piece one block. I added a ‘ghost’ piece feature, a kickback feature, pausing, restarting, audio and art. There was nothing too difficult about all this, but it did require work. So, in retrospect, I want to take on something a bit more challenging for my next 1GAM game.

Lessons Learned

One mistake I made when writing this was overcomplicating the audio code. I used Minim for the Processing version, but I had to write my own implementation for the Processing.js version. I decided to look into the Web Audio API, and after fumbling around with it I did eventually get it to work, but then the sound didn’t work in Firefox. Realizing that I had made a simple matter complex, I ended up scrapping the whole thing and resorting to audio tags, which took very little effort to get working. The SoundManager I have for JavaScript is now much shorter, easier to understand, and still gets the job done.

Another issue I ran into was a bug in the Processing.js library. When using tint() to color my ghost pieces, Pjs would refuse to render one of the blocks that composed a Tetris piece. I dove into the tint() code and tried fixing it myself, but I didn’t get too far. After taking a break, I realized I didn’t really have the time to invest in a Pjs fix, and I came up with a dead-simple workaround: since only the first block wasn’t rendering, I render that first ‘invisible’ block off screen, then render the same block a second time on screen. Fixing the issue in Pjs would have been nice, but that wasn’t my main goal.

Lastly, I was reminded how much time it takes to polish a game. I completed the core mechanics of Tetrissing in a few hours, but it took another 2 weeks to polish it!

If you like my work, please star or fork my repository on Github. Also, please post any feedback, thanks!

BitCam.Me December 25, 2012

Posted by Andor Salga in Open Source, Pixel Art, Processing.js.

bitcam_me_asalga

Check this out: I created a WebRTC demo that pixelates your webcam video stream: BitCam.me.

I recently developed a healthy obsession with pixel art and began making some doodles in my spare time. Soon after I started doing this, I wondered what it would be like to generate pixel art programmatically. So I fired up Processing and made a sketch that did just that: it pixelated a PNG by taking the average color of each block of neighboring pixels.
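The averaging itself is only a few lines. Here is a minimal Processing sketch of the idea; it’s my own reconstruction with a hypothetical input file name, not the actual BitCam.me code:

// Pixelate an image by averaging each blockSize x blockSize group of
// pixels into a single color (illustrative sketch, not the BitCam.me source).
PImage img;
int blockSize = 8;

void setup(){
  size(256, 256);
  img = loadImage("photo.png");   // hypothetical input image
  img.resize(width, height);
  noStroke();
  noLoop();
}

void draw(){
  img.loadPixels();
  for(int y = 0; y < height; y += blockSize){
    for(int x = 0; x < width; x += blockSize){
      float r = 0, g = 0, b = 0;
      int count = 0;
      // Average every pixel inside this block
      for(int by = y; by < min(y + blockSize, height); by++){
        for(int bx = x; bx < min(x + blockSize, width); bx++){
          color c = img.pixels[by * width + bx];
          r += red(c);
          g += green(c);
          b += blue(c);
          count++;
        }
      }
      // Draw one flat-colored square in place of the block
      fill(r / count, g / count, b / count);
      rect(x, y, blockSize, blockSize);
    }
  }
}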

After completing that sketch, I realized I could easily upgrade what I had written to use WebRTC instead of a static image. I thought the demo would be much more fun and engaging in real time. I added the necessary JavaScript and was pretty excited about it (:

I then found SuperPixelTime and saw it did something similar to what I had written. But unlike my demo, it had some nice options to change the color palette. I read the code and figured making those changes wouldn’t be difficult either and soon had my own controls for changing palettes.

I had a great time making the demo. Let me know what you think!

Enjoy!

Engage3D Hackathon Coming Soon! December 8, 2012

Posted by Andor Salga in Kinect, Open Source, point cloud, webgl.

A month ago, Bill Brock and I pitched our idea, over Google chat, to develop an open source 3D web-based videoconferencing system for the Mozilla Ignite Challenge. Will Barkis from Mozilla recorded and moderated the conversation and then sent it off to a panel of judges. The pitch was for a slice of the $85,000 being doled out to the winners of the Challenge.

After some anticipation, we got word that we were among the winners. We would receive $10,000 in funding to support the development of our prototype. Our funding will cover travel expenses, accommodations, the purchasing of additional hardware and the development of the application itself.

We will also take on two more developers and have a hackathon closer to the end of the month. Over the span of four days we will iterate on our original code and release something more substantial. The Company Lab in Chattanooga has agreed to provide us with a venue to hack and a place to plug into the network. Both Bill and I are extremely excited to get back to hacking on Engage3D and to get back to playing with the gig network.

We will keep you updated on our Engage3D progress, stay tuned!

Developing engage3D – Phase 1 October 25, 2012

Posted by Andor Salga in Kinect, Open Source, point cloud, webgl.


A point cloud of Bill Brock rendered with WebGL

I am working with Bill Brock (a PhD student from Tennessee) to develop an open source 3D video conferencing system that we are calling engage3D. We are developing this application as part of Mozilla’s Ignite Challenge.

Check out the video!

During the past few days, Bill and I made some major breakthroughs in terms of functionality. Bill sent me Kinect depth and color data via a server he wrote. We then managed to render that data in the browser (on my side) using WebGL. We are pretty excited about this since we have been hacking away for quite some time!

There have been significant drawbacks to developing this application over commodity internet; I managed to download over 8 GB of data in a single day while experimenting with the code. Hopefully, this will soon be ported to the GENI resources in Chattanooga, TN for further prototyping and testing.

Even though we are still limited to a conventional internet connection, we want to do some research into data compression. Also, we have been struggling with calibrating the Kinect. This is also something we hope to resolve soon.

Experimenting with Normal Mapping September 26, 2012

Posted by Andor Salga in Game Development, Open Source, Processing, Processing.js.


Click me!

** Update March 20 2014 **
The server where this sketch was being hosted went down. I recently made several performance improvements to the sketch and re-posted it on OpenProcessing.

Quick note about the demo above. I’m aware the performance on Firefox is abysmal and I know it’s wonky on Chrome. Fixes to come!

I’ve heard and seen the use of normal mapping many times, but I have never experimented with it myself, so I decided I should, just to learn something new. Normal mapping is a type of bump mapping. It is a way of simulating bumps on an object, usually in a 3D game. These bumps are simulated with the use of lights. To get a better sense of this technique, click on the image above to see a running demo. The example uses a 2D canvas and simulates Phong lighting.

So why use it and how does it work?

The great thing about normal mapping is that you can simulate the surface detail of a more complex object without providing the extra vertices. By providing only the normals and then lighting the object, it will seem like the object has more detail than it actually does. If we wanted to place this code in a 3D game, we would only need 4 vertices to define a quad (maybe a wall), and along with the normal map, we could render some awesome Phong illumination.

So, how does it work? Think of what a bitmap is: just a 2D map of bits, where each pixel holds the color components that make up the graphic. A normal map is also a 2D map; what makes it special is how its data is interpreted. Instead of holding a ‘color’ value, each pixel actually stores a vector that defines which way the corresponding part of the color image is ‘facing’: its normal vector.

These normals need to be somehow encoded into an image. This can be easily done since we have three floating point components (x,y,z) that need to be converted into three 8 or 16 bit color components (r,g,b). When I began playing with this stuff, I wanted to see what the data actually looked like. I first dumped out all the color values from the normal map and found the range of the data:

Red (X) ranges from 0 to 255
Green (Y) ranges from 0 to 255
Blue (Z) ranges from 127 to 255

Why is Z different? When I first looked at this, it seemed that each component needs 127 subtracted from it so the values map onto their corresponding negative ranges in a 3D coordinate system. However, Z always points towards the viewer, never away. If you do a search for normal map images, you will see the images are blue in color, which makes sense: the normal always points ‘out’ of the image, so the blue component is pronounced. If Z ranged from 0 to 255, subtracting 127 could produce a negative number, which wouldn’t make sense. So, after subtracting 127 from each component:

X -127 to 128
Y -127 to 128
Z 0 to 128

The way I picture this is that all the normals are contained in a translucent semi-sphere whose base lies on the XY-plane. But since the Z range is half that of X and Y, it looks more like a squashed semi-sphere. This tells us the vectors aren’t normalized, but that is easily solved with normalize(). Once normalized, they can be used in our lighting calculations. Now that we have some theoretical idea of how this rendering technique works, let’s step through some code. I wrote a Processing sketch, but of course the technique can be used in other environments.

// Declare globals to avoid garbage collection

// colorImage is the original image the user wants to light
// targetImage will hold the result of blending the 
// colorImage with the lighting.
PImage colorImage, targetImage;

// normalMap holds our 2D array of normal vectors. 
// It will be the same dimensions as our colorImage 
// since the lighting is per-pixel.
PVector normalMap[][];

// shine will be used in specular reflection calculations
// The higher the shine value, the shinier our object will be
float shine = 40.0f;
float specCol[] = {255, 128, 50};

// rayOfLight will represent a vector from the current 
// pixel to the light source (cursor coords);
PVector rayOfLight = new PVector(0, 0, 0);
PVector view = new PVector(0, 0, 1);
PVector specRay = new PVector(0, 0, 0);
PVector reflection = new PVector(0, 0, 0);

// These will hold our calculated lighting values
// diffuse will be white, so we only need 1 value
// Specular is orange, so we need all three components
float finalDiffuse = 0;
float finalSpec[] = {0, 0, 0};

// nDotL = Normal dot Light. This is calculated once
// per pixel in the diffuse part of the algorithm, but we may
// want to reuse it if the user wants specular reflection
// Define it here to avoid calculating it twice per pixel
float nDotL;

void setup(){
  size(256, 256);

  // Create targetImage only once
  colorImage = loadImage("data/colorMap.jpg");
  targetImage = createImage(width, height, RGB);

  // Load the normals from the normalMap into a 2D array to 
  // avoid slow color lookups and clarify code
  PImage normalImage =  loadImage("data/normalMap.jpg");
  normalMap = new PVector[width][height];
  
  // i indexes into the 1D array of pixels in the normal map
  int i;
  
  for(int x = 0; x < width; x++){
    for(int y = 0; y < height; y++){
      i = y * width + x;

      // Convert the RGB values to XYZ
      float r = red(normalImage.pixels[i]) - 127.0;
      float g = green(normalImage.pixels[i]) - 127.0;
      float b = blue(normalImage.pixels[i]) - 127.0;
      
      normalMap[x][y] = new PVector(r, g, b);
      
      // Normal needs to be normalized because Z
      // ranged from 127-255
      normalMap[x][y].normalize();
    }
  }
}

void draw(){
  // When the user is no longer holding down the mouse button, 
  // the specular highlights aren't used. So reset the values
  // every frame here and set them only if necessary
  finalSpec[0] = 0;
  finalSpec[1] = 0;
  finalSpec[2] = 0;
  
  // Per frame we iterate over every pixel. We are performing
  // per-pixel lighting.
  for(int x = 0; x < width; x++){
    for(int y = 0; y < height; y++){
      
      // Simulate a point light which means we need to
      // calculate a ray of light for each pixel. This vector
      // will go from the light/cursor to the current pixel.
      // Don't use PVector.sub() because that's too slow.
      rayOfLight.x = x - mouseX;
      rayOfLight.y = y - mouseY;

      // We only have two dimensions with the mouse, so we
      // have to create third dimension ourselves.
      // Force the ray to point into 3D space down -Z. 
      rayOfLight.z = -150;
      
      // Normalize the ray so it can be used in a dot product
      // operation to get a sensible value (-1 to 1)
      // The normal will point towards the viewer
      // The ray will be pointing into the image
      rayOfLight.normalize();
      
      // We now have a normalized vector from the light
      // source to the pixel. We need to figure out the
      // angle between this ray of light and the normal
      // to calculate how much the pixel should be lit.

      // Say the normal is [0,1,0] and the light is [0,-1,0]
      // The normal is pointing up and the ray, directly down.
      // In this case, the pixel should be fully 100% lit
      // The angle would be PI

      // If the ray was [0,1,0] (same direction as the normal) it would
      // not contribute light at all, 0% lit
      // The angle would be 0 radians

      // We can easily calculate the angle by using the
      // dot product and rearranging the formula.
      // Omitting  magnitudes since they are = 1
      // ray . normal = cos(angle)
      // angle = acos(ray . normal)

      // Taking the acos of the dot product returns
      // a value between 0 and PI, so we normalize
      // that and scale to 255 for the color amount     
      nDotL = rayOfLight.dot(normalMap[x][y]);
      finalDiffuse = acos(nDotL)/PI * 255.0;
      
      // Avoid more processing by only calculating
      // specular lighting if the user wants to do it.
      // It is fairly processor intensive.
      if(mousePressed){
        // The next 5 lines calculates the reflection vector
        // using Phong specular illumination. I've written
        // a detailed blog about how this works: 
        // http://asalga.wordpress.com/2012/09/23/understanding-vector-reflection-visually/ 
        // Also, when we have to perform vector subtraction
        // as part of calculating the reflection vector,
        // do it manually since calling sub() is slow.
        reflection = new PVector(normalMap[x][y].x,
                                 normalMap[x][y].y,
                                 normalMap[x][y].z);
        reflection.mult(2.0 * nDotL);
        reflection.x -= rayOfLight.x;
        reflection.y -= rayOfLight.y;
        reflection.z -= rayOfLight.z;
        
        // The view vector is (0, 0, 1), that is, it points
        // directly towards the viewer. The dot product
        // of two normalized vectors returns a value from
        // -1 to 1. However, none of the normal vectors
        // point away from the viewer, so we don't have to
        // worry about the dot product going negative and
        // producing a negative specular intensity.
        
        // Raise the result of that dot product value to the
        // power of shine. The higher shine is, the shinier
        // the surface will appear.        
        float specIntensity = pow(reflection.dot(view),shine);
        
        finalSpec[0] = specIntensity * specCol[0];
        finalSpec[1] = specIntensity * specCol[1];
        finalSpec[2] = specIntensity * specCol[2];
      }
      
      // Now that the specular and diffuse lighting are
      // calculated, they need to be blended together
      // with the original image and placed in the
      // target image. Since blend() is too slow, 
      // perform our own blending operation for diffuse.
      targetImage.set(x,y, 
        color(finalSpec[0] + (finalDiffuse *   
                            red(colorImage.get(x,y)))/255.0,

              finalSpec[1] + (finalDiffuse * 
                          green(colorImage.get(x,y)))/255.0,

              finalSpec[2] + (finalDiffuse *  
                         blue(colorImage.get(x,y)))/255.0));
    }
  }
  
  // Draw the final image to the canvas.
  image(targetImage, 0,0);
}

Whew! Hope that was a fun read. Let me know what you think!

Porting Crayon Physics to the Web June 12, 2012

Posted by Andor Salga in Game Development, Games, Open Source, Processing.js.


click that: ^^

It’s been a while since I gave my blog some love, so here goes.

Many semesters ago, I tried to make an HTML5 port of Crayon Physics Deluxe using Processing.js and Box2DJS. I got the rendering right, but I couldn’t get the physics to work correctly. My demo was limited to bounding box collisions.

I recently tried again to see if I could make this thing work. When I hacked together a demo of a ball rolling across the canvas, which was composed of several shapes, I became super-giddy. I knew it was indeed possible to port Crayon Physics to the Web. So I kept hacking on it, and I now have something semi-decent. There are of course a bunch of bugs to work out, but I’m excited to see this project’s potential.

No Comply Game Prototype with Processing January 25, 2012

Posted by Andor Salga in Game Development, Gladius, Open Source, Processing.

Last week I met with Dave Humphrey and Jon Buckley to discuss creating a game for the HTML5 games week which will take place at Mozilla’s Toronto office in mid-February. The main purpose of this is to drive the development and showcase the Gladius game engine.

At the end of the meeting we decided it would make the most sense to upgrade the No Comply demo by adding interaction—making a small game. Since we only have a few weeks, we decided to keep it simple. Keeping that in mind I created a game specification.

Play Analysis

After putting together the spec, I started on a prototype. I began the prototype by playing and analyzing the game mechanics of Mortal Kombat I. The game is simple enough to emulate given the time frame. I took some notes and concluded that characters always appear to be in discrete states, and that a character in a particular state may or may not be able to transition into another state. Initially a player is idle. From there, they can transition into jumping. Once jumping they cannot block, but they are allowed to punch and kick. Having understood the basic rules, I drew a diagram to visually represent some of the states.

The diagram led me to believe I could use the state pattern to keep the game extensible: we can get a basic game working, and the design should lend itself to later (painless) modification.
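In sketch form, the spirit of that pattern looks something like the Processing code below. The class names, keys and transitions are my own illustration, not the actual prototype code:

// A bare-bones state pattern for a fighter (illustrative only).
abstract class PlayerState {
  // Each state decides which transitions are legal.
  PlayerState handleInput(char k){ return this; }
  String name(){ return "state"; }
}

class IdleState extends PlayerState {
  PlayerState handleInput(char k){
    if(k == 'w') return new JumpState();
    if(k == 'p') return new PunchState();
    return this;
  }
  String name(){ return "idle"; }
}

class JumpState extends PlayerState {
  // While jumping the player may punch, but not block.
  PlayerState handleInput(char k){
    if(k == 'p') return new PunchState();
    return this;
  }
  String name(){ return "jumping"; }
}

class PunchState extends PlayerState {
  // Transitions back to idle are omitted to keep the sketch short.
  String name(){ return "punching"; }
}

class Player {
  PlayerState state = new IdleState();
}

Player player = new Player();

void setup(){
  size(200, 100);
}

void draw(){
  background(0);
  fill(255);
  text(player.state.name(), 20, 50);  // render the current state as text
}

void keyPressed(){
  player.state = player.state.handleInput(key);
}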

Prototyping the Prototype

Having worked with Processing for some time I knew it would be an ideal tool to construct a prototype of the game. Processing enables developers to get graphical interactive software up and running quickly and easily. The structure is elegant and the language inherits Java’s simple object-oriented syntax.

I began creating a Processing sketch by adding the necessary classes for player states such as moving, jumping and punching. I hooked in keyboard input to allow changing states and rendered text to indicate the current state. I was able to fix most of the bugs by playing around with different key combinations and just looking at the text output.

I eventually added graphical content, but intentionally kept it crappy. I resisted the urge to create attractive assets since they would be replaced with the No Comply sprites anyway. Neglecting the aesthetics was a challenge—if you cringe at the graphics, I succeeded.

Okay, still want to play it?

Next Steps

I omitted collision detection and some simple game logic from the prototype, but I believe it demonstrates we can create a structure good enough for a fighting game for Gladius. Today I’ll be switching gears and starting to hack on Gladius to get COLLADA importing working which we’ll discuss in a Paladin meeting early next week.
