
Understanding Raycasting Step-by-Step January 22, 2014

Posted by Andor Salga in C++, Game Development, gfx, Open Source, OpenFrameworks.


Introduction

For some time I’ve wanted to understand how raycasting works. Raycasting is a graphics technique for rendering 3D scenes and it was used in old video games such as Wolfenstein 3D and Doom. I recently had some time to investigate and learn how this was done. I read up on a few resources, digested the information, and here is my understanding of it, paraphrased.

The ideas here are not my own. The majority of the code is from the references listed at the bottom of this post. I wrote this primarily to test my own understanding of the techniques involved.

Raycasting vs. Raytracing

Raycasting is not the same as raytracing. Raycasting casts one ray per screen column (width rays in total) into the scene; it is a 2D problem and it is non-recursive. Raytracing, on the other hand, casts width*height rays into a 3D scene, and those rays can bounce off several objects before a final color is calculated. It is much more involved and isn’t typically used for real-time applications.

Framework

Having spent a few years playing with Processing and Processing.js, I felt it was time for me to move on to OpenFrameworks. I’ve always favored C++ over Java, so this was an excuse to make my first OpenFrameworks application. My first one! (:

Background Theory

The idea is that we draw a very simple 3D world by casting viewportWidth rays into a scene. These rays are based on the player’s position, direction, and field of view. Our scene is primitive: a 2D array populated with integers, where each non-zero value represents a different colored cell/block and 0 represents empty space.

We iterate from the left side of the viewport to the right, creating a ray for every column of pixels. Once a ray hits the edge of a non-empty cell, we calculate the distance traveled. Based on how far away that edge is, we draw a shorter or longer single-pixel vertical line centered in the viewport. This is how we achieve foreshortening.

Since our implementation does not involve any 3D geometry, this technique can be implemented on anything that supports a 2D context. This includes HTML5 canvas and even TI-83 devices.

What really excites me about this method is that the complexity is not dependent on the number of objects in the level, but instead on the number of horizontal pixels in the viewport! Very cool!!

Initial Setup

Inside an empty OpenFrameworks template, you will have an ofApp.h header file. In this file we declare a few variables: pos, right, dir, FOV, and rot define properties of our player; width and height are aliases for the viewport dimensions; and worldMap defines our world.

#pragma once

#include "ofMain.h"

class ofApp : public ofBaseApp{
  private:
  ofVec2f pos;
  ofVec2f right;
  ofVec2f dir;
  float FOV;
  float rot;

  int width;
  int height;
    
  int worldMap[15][24] = {
    {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,3,3,3,0,0,0,0,0,3,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,3,0,0,3,0,0,0,0,0,0,3,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,1},
    {1,0,0,0,3,0,0,0,0,0,2,2,0,0,0,2,2,0,0,0,0,0,0,1},
    {1,0,0,0,3,0,3,0,0,0,2,2,0,0,0,2,2,0,0,0,0,0,0,1},
    {1,0,0,0,3,3,3,0,0,0,2,2,0,0,0,2,2,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,3,0,3,0,3,0,3,0,3,0,3,0,3,0,3,0,1},
    {1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1},
    {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}
  };
    
  public:
  void setup();
  void update();
  void draw();
		
  void keyPressed(int key);
  void keyReleased(int key);
  
  ~ofApp();
};

In the setup method of our ofApp.cpp implementation file we assign initial starting values to our player. The position is where the player will be relative to the top left of the array/world. We don’t assign the direction and right vectors here since those are set in update() every frame.

void ofApp::setup(){
  pos.set(5, 5);
  FOV = 45;
  rot = PI;
  
  width = ofGetWidth();
  height = ofGetHeight();
}

Setting the Direction and Right Vectors

We need two crucial vectors, the player’s direction and right vector.

In update(), we calculate these vectors based on the scalar rot value. (The keyboard will later drive the rot value, thus rotating the view).

Once we have the direction, we can calculate the perpendicular vector using a perp operation. There is an ofVec2f method that performs this task for you, but I wanted to demonstrate how easy it is here: it just involves swapping the components and negating one.

If our rot value is initially PI, we end up looking toward the left inside the world. Since our world is defined by an array, y increases downward in our world coordinate system.

void ofApp::update(){
  dir.x = cos(rot);
  dir.y = sin(rot);
  
  right.x = -dir.y;
  right.y = dir.x;
}

Drawing a Background

We can now start getting into our render method.

To clear each frame we are going to draw 2 rectangles directly on top of the viewport. The top half of the viewport will be the sky and the bottom will be grass.

void ofApp::draw(){
  // Blue rectangle for the sky
  ofSetColor(64, 128, 255);
  ofRect(0, 0, width, height/2);
  
  // Green rectangle for the ground (the bottom half of the viewport)
  ofSetColor(0, 255, 0);
  ofRect(0, height/2, width, height/2);

Calculating the Base Length of the Viewing Triangle

The FOV determines the angle from which the rays shoot out from the player’s position. The exact center ray is our player’s direction. Perpendicular to this ray is a line which connects the far left and far right rays. This all forms a little isosceles triangle, with the apex at the player’s position. The base of the triangle is the camera line, whose magnitude we are trying to calculate. We need this length to know how far left and right the rays must span: the greater the FOV, the wider the rays will span.

Using some simple trigonometry, we can get this length. Note that we divide by two since we’ll be generating the rays in two parts, left side of the direction vector and the right side.

  float camPlaneMag = tan( FOV / 2.0f * (PI / 180.0) );

Generating the Rays

Before we start the main render loop, we declare rayDir which will be re-used for each loop iteration.

  ofVec2f rayDir;

Each iteration of this loop will be responsible for rendering a single pixel vertical line in the viewport. From the far left side of the screen to the far right.

  for(int x = 0; x < width; x++){

For each vertical slice, we need to generate a ray casting out into the scene. We do this by mapping the screen coordinates [0 to width-1] to [-1 to 1]. When this mapped value is 0 (the middle column), the ray we cast for intersection tests will be collinear with our direction vector.

currCamScale will drive the direction of each ray; it is the factor by which we scale the right vector.

    float currCamScale = (2.0f * x / float(width)) - 1.0f;

Here is where we generate our rays.

Our right vector is scaled depending on the base of our triangle viewing area. Then we scale it again depending on which slice we are currently rendering. If x = 0, currCamScale maps to -1 which means we are generating the farthest left ray.

Add the resultant vector to the direction vector and this will form our current ray shooting into our scene.

    rayDir.set(dir.x + (right.x * camPlaneMag) * currCamScale, 
               dir.y + (right.y * camPlaneMag) * currCamScale);

Calculating the Magnitude Between Edges

Each ray travels inside a 2D world defined by an array populated with integers. Most of the values are 0 representing empty space. Anything else denotes a wall which is 1×1 unit in dimensions. The integers then help define the ‘cell’ edges. These edges will be used to help us perform intersection tests with the rays.

What we are trying to figure out is: for each ray, when does it hit an edge? To answer that question we’ll need to figure out two things:

1. What is the magnitude from the player’s position, traveling along the current ray, to the nearest x edge (and y edge)? Note I say magnitude because we will be working with SCALAR values. I got confused here trying to figure this part out because I kept thinking in terms of vectors.

2. What is the magnitude from one x edge to the next x edge? (And y edge to the next y edge.) We aren’t trying to figure out the vertical/horizontal distance, that’s just a unit of 1. Instead, we need to calculate the hypotenuse of the triangle formed from the ray going from one edge to the other. Again, when calculating the magnitude for x, the horizontal base leg length of the triangle formed will be 1. And when calculating the magnitude for y, the vertical height of the triangle will be 1. Drawing this on paper helps.

Once we have these values we can start running the DDA algorithm. Selecting the nearest x or y edge, testing if that edge has a filled block associated with it, and if not, incrementing some values. We’ll have two scalar values ‘racing’ one another.

Okay, let’s answer the second question first. Based on the direction, what is the magnitude/hypotenuse length from one x edge to another? We know horizontally, the length is 1 unit. That means we need to scale our ray such that the x component will be 1.

What value multiplied by x will result in 1? The answer is the inverse of x! So if we multiply the x component of the ray by 1/x, we’ll need to do the same for y to maintain the ray direction. This gives us a new vector:

v = [ rayDir.x * 1 / rayDir.x , rayDir.y * 1 / rayDir.x]

We already know the result of the calculation for the first component, so we can simplify:

v = [ 1, rayDir.y / rayDir.x ]

Then get the magnitude using the Pythagorean theorem.

    float magBetweenXEdges = ofVec2f(1, rayDir.y * (1.0 / rayDir.x) ).length();
    float magBetweenYEdges = ofVec2f(rayDir.x * (1.0 / rayDir.y), 1).length();

Calculating the Starting Magnitude

For this we have to calculate the length from the player’s position to the nearest x and y edges. To do this, we get the relative position from within the cell then use that as a scale for the magnitude between the edges.

Here we keep track of which direction the ray is going by setting dirStepX which will be used to jump from cell to cell in the correct direction.

For x, we have a triangle with a base horizontal length of 1 and some magnitude for its hypotenuse which we recently figured out. Now, based on the relative position of the user within a cell, how long is the hypotenuse of this triangle?

If the x position of the player was 0.2:

magToXedge = (0 + 1 - 0.2) * magBetweenXedges
           = 0.8 * magBetweenXedges

This means we are 80% away from the closest x edge, thus we need 80% of the hypotenuse. The same method is used to calculate the starting position of the y edge.

    int worldIndexX = int(pos.x);   // index of the cell the player occupies
    int worldIndexY = int(pos.y);
    float magToXedge, magToYedge;
    int dirStepX, dirStepY;

    if(rayDir.x > 0){
      magToXedge = (worldIndexX + 1.0 - pos.x) * magBetweenXEdges;
      dirStepX = 1;
    }
    else{
      magToXedge = (pos.x - worldIndexX) * magBetweenXEdges;
      dirStepX = -1;
    }

    if(rayDir.y > 0){
      magToYedge = (worldIndexY + 1.0 - pos.y) * magBetweenYEdges;
      dirStepY = 1;
    }
    else{
      magToYedge = (pos.y - worldIndexY) * magBetweenYEdges;
      dirStepY = -1;
    }

Running the Search

Here x and y values ‘race’. If one length is less than the other, we increase the shortest, increment 1 unit in that direction and check again. We end the loop as soon as we find a non-empty cell edge.

Note, we keep track of not only the last cell we were working with, but also which edge (x or y) that we hit with sideHit.

    int sideHit;

    do{
      if(magToXedge < magToYedge){
        magToXedge += magBetweenXEdges;
        worldIndexX += dirStepX;
        sideHit = 0;
      }
      else{
        magToYedge += magBetweenYEdges;
        worldIndexY += dirStepY;
        sideHit = 1;
      }
    }while(worldMap[worldIndexY][worldIndexX] == 0); // the [15][24] map is indexed [row][column], i.e. [y][x]

Selecting a Color & Lighting

We kept track of the non-empty index we hit, so we can use this to index into the map and get the associated color for the cell.

    ofColor wallColor;
    switch(worldMap[worldIndexY][worldIndexX]){ // indexed [row][column], i.e. [y][x]
      case 1:wallColor = ofColor(255,255,255);break;
      case 2:wallColor = ofColor(0,255,255);break;
      case 3:wallColor = ofColor(255,0,255);break;
    }

If all faces of the walls were a solid color, it would be difficult to determine edges and where the walls were in relation to each other. So to provide a very rudimentary form of lighting, any time a ray hits a y edge, we darken the current color.

    wallColor = sideHit == 0 ? wallColor : wallColor/2.0f;
    ofSetColor(wallColor);

Preventing Distortion

The farther the wall is from the player, the shorter the vertical slice on the screen needs to be. Keep in mind that when viewing a wall/plane dead on, the rays toward the edges of the screen are longer, which would result in shorter ‘scanlines’. This isn’t desired since it would warp our representation of the world with a fisheye effect.

***Update***
After I posted this, I was e-mailed an explanation of how this part works. Here’s an image to help clarify things: [raycast distortion fix]

Essentially, we take the base of the ‘lower’ triangle and divide it by the base of the ‘upper’ triangle. Proportionally, this value is the same as the ratio of the perpendicular-to-camera line to the direction vector.

    float wallDist;

    if(sideHit == 0){
      wallDist = fabs((worldIndexX - pos.x + (1.0 - dirStepX) / 2.0) / rayDir.x);
    }
    else{
      wallDist = fabs((worldIndexY - pos.y + (1.0 - dirStepY) / 2.0) / rayDir.y);
    }

Calculating Line Height

If a ray is 1 unit from the wall, we draw a line that fills the entire height of the screen.

To provide a proper FPS feel, we center the vertical lines in screen space along Y. The middle of the cells will be at eye height.

To calculate how long the vertical line needs to be on the screen, we divide the viewport height by the distance from the wall. So if the wall is 1 unit from the player, it fills up the entire vertical slice.

At this point we need to center the line vertically in the viewport. Picture two vertical lines side by side, the smaller one centered against the larger one. To find the top of the smaller line relative to the larger one, take half of each line’s height and subtract the smaller half from the larger half. This gives us the starting y coordinate for the line we have to draw.

    float lineHeight = height/wallDist;
    float startY = height/2.0 - lineHeight/2.0;

Rendering a Slice

We finish our render loop by drawing a single-pixel-width vertical line for this x scanline iteration, centered in the viewport. You could instead clamp the lines to the bottom of the screen and make the player feel like a mouse in a maze (:

    ofLine(x, startY, x, startY + lineHeight);
  }// finish loop
}// finish draw()

Keyboard Control

I’m still learning OpenFrameworks, so my implementation for keyboard control works, but it’s a hack. I’m too embarrassed to post the code until I resolve the issue I’m dealing with. In the meantime, I leave the task of updating the position and rotation of the player to the reader.

References

[1] http://www.permadi.com/tutorial/raycast/index.html
[2] http://lodev.org/cgtutor/raycasting.html
[3] http://en.wikipedia.org/wiki/Ray_casting

Game 1 for 1GAM 2014 – Asteroids January 14, 2014

Posted by Andor Salga in 1GAM, Game Development, Open Source, Processing, Processing.js.


Skip the blog and play Asteroids!

Back in November, I picked up a contract to develop Asteroids in Processing.js. After developing the game, I lost touch with my client, and with them the $150 fee. Soon after, I went on vacation, and when I returned I decided to polish off what I had and submit it as a 1GAM entry. I added some audio, gave it a more authentic look and feel, added more effects, and the like. So, this is the official release of my first 2014 1GAM game!

Implementing PShader.set() October 5, 2013

Posted by Andor Salga in JavaScript, Processing, Processing.js, PShader.

I was in the process of writing ref tests for my implementation of PShader.set() in Processing.js, when I ran into a nasty problem. PShader.set() can take on a variety of types including single floats and integers to set uniform shader variables. For example, we can have the following:

pShader.set("i", 1);
pShader.set("f", 1.0);

If the second argument is an integer, we must call uniform1i on the WebGL context; otherwise uniform1f needs to be called. But in JavaScript, we can’t distinguish between 1.0 and 1. I briefly considered modifying the interface for this method, but the last thing I wanted was to change the interface, so I thought about it until I came up with an interesting solution. I figured, why not call both uniform1i and uniform1f right after each other? What would happen? It turns out, it works! One will always fail and the other will succeed, leaving us with the proper uniform set!

Developing Shaders in Vim August 7, 2013

Posted by Andor Salga in Vim.

I’ve been recently writing shaders in Vim and found some fun shortcuts to help in development.

Frequently I’ll want both the vertex and fragment shaders open in the same terminal window, but I’ll want Vim to have multiple windows. In the man pages, I found a simple command line switch to open both files, one on top of the other. This is known as stacked windows.

vim -o vert.glsl frag.glsl

But I like seeing the shaders side-by-side, especially if there are a bunch of varying variables, so use an uppercase “O” switch instead.

vim -O vert.glsl frag.glsl

Making it Pretty

I finally decided to add some color to vim, so I looked around for a syntax file for GLSL and found one at vim.org. Download and install it to get syntax highlighting.

So after starting vim, I like running these lastline commands to set the line numbers and get the nice GLSL syntax highlighting:

:set nu
:syntax on
:set syntax=glsl

Make sure there isn’t a space between “=” and “glsl”, otherwise you’ll get an “Unknown option” error.

Navigation

Now that the windows are all set up, you can start navigating between them and editing your shaders. To manipulate the windows you use CTRL-W, which enters the window command mode, then add an extra character for what you want. For example, to toggle the cursor between the windows you can use CTRL-W w. This involves holding down CTRL, pressing ‘w’, letting go of both keys, and pressing ‘w’ again. To close the current window, you can use CTRL-W q in the same way. You can of course still use the lastline mode commands :wq or :q!

If you ever need to split up the windows further you have a few options. You can use CTRL-W s to split a window horizontally or CTRL-W v to split vertically. You can also go to lastline mode and type:

:split

Or if you want to split the current window vertically,

:vsplit

I found some of the more useful window command mode commands are these:
CTRL-W w – Go to the next window
CTRL-W q – Quit the current window
CTRL-W s – Split the current window horizontally
CTRL-W v – Split the current window vertically
CTRL-W n – Create a new empty window
CTRL-W = – Make all windows equal size

But better yet, start Vim and type :help windows which will give you more information than you’d ever want about windows!

Experimenting with Normal Mapping using PShaders May 19, 2013

Posted by Andor Salga in GLSL, Processing, PShader.


Just over half a year ago, I wrote a blog about Experimenting with Normal Mapping. I wrote a simple Processing sketch that demonstrated the technique and I also wrote a hacked up Processing.js sketch to squeeze out some extra few frames/sec on the browser side of things.

This long weekend, I found myself with some extra time to hack on something. I remember that several weeks ago, Processing 2 introduced PShaders, which at the time I found exciting, but I didn’t have a chance to look at them. So this weekend I decided to take a look into this new PShader object. I haven’t touched shaders in a while, so I brushed up on them by reading the PShader tutorial on the Processing page.

After my refresher, I got to hacking and re-wrote my normal mapping sketch. Here is my complete sketch along with vertex and fragment shaders:

The sketch:

PImage diffuseMap;
PImage normalMap;

PShape plane;

PShader normalMapShader;

void setup() {
  size(256, 256, P3D);
  
  diffuseMap = loadImage("crossColor.jpg");
  normalMap = loadImage("crossNormal.jpg");
  
  plane = createPlane(diffuseMap);
  
  normalMapShader = loadShader("texfrag.glsl", "texvert.glsl");
  shader(normalMapShader);
  normalMapShader.set("normalMap", normalMap);
}

void draw(){
  background(0);
  translate(width/2, height/2, 0);
  scale(128);
  shape(plane);
}

void mouseMoved(){
  updateCursorCoords();
}

void mouseDragged(){
  updateCursorCoords();
}

void updateCursorCoords(){
  normalMapShader.set("mouseX", (float)mouseX);
  normalMapShader.set("mouseY", height - (float)mouseY);
}

void mousePressed(){
  normalMapShader.set("useSpecular", 1);
}

void mouseReleased(){
  normalMapShader.set("useSpecular", 0);
}

PShape createPlane(PImage tex) {
  textureMode(NORMAL);
  PShape sh = createShape();
  sh.beginShape(QUAD);
  sh.noStroke();
  sh.texture(tex);
  sh.vertex( 1, -1, 0, 1, 0);
  sh.vertex( 1,  1, 0, 1, 1);    
  sh.vertex(-1,  1, 0, 0, 1);
  sh.vertex(-1, -1, 0, 0, 0);
  sh.endShape(); 
  return sh;
}

The vertex shader:

#define PROCESSING_TEXTURE_SHADER

uniform mat4 transform;
uniform mat4 texMatrix;

attribute vec4 vertex;
attribute vec2 texCoord;

varying vec4 vertTexCoord;

void main() {
  gl_Position = transform * vertex;
  vertTexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);
}

The fragment shader:

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

#define PI 3.14159265

uniform sampler2D normalMap;
uniform sampler2D colorMap;

uniform int useSpecular;

uniform float mouseX;
uniform float mouseY;

varying vec4 vertTexCoord;

const vec3 view = vec3(0,0,1);
const float shine = 40.0;

void main() {
  // Convert the RGB values to XYZ
  vec4 normalColor  = texture2D(normalMap, vertTexCoord.st);
  vec3 normalVector = vec3(normalColor - vec4(0.5));
  normalVector = normalize(normalVector);

  vec3 rayOfLight = vec3(gl_FragCoord.x - mouseX, gl_FragCoord.y - mouseY, -150.0);
  rayOfLight = normalize(rayOfLight);

  float nDotL = dot(rayOfLight, normalVector);

  vec3 finalSpec = vec3(0);

  if(useSpecular == 1){
    vec3 reflection = normalVector;
    reflection = reflection * nDotL * 2.0;
    reflection -= rayOfLight;
    float specIntensity = pow( dot(reflection, view), shine);
    finalSpec = vec3(1.0, 0.5, 0.2) * specIntensity;
  }

  float finalDiffuse = acos(nDotL)/PI;

  gl_FragColor = vec4(finalSpec + vec3(texture2D(colorMap, vertTexCoord.st) * finalDiffuse), 1.0);
}

Performance

I found using PShaders very exciting, since I could place all this computational work on the GPU rather than CPU. So I wondered about the performance vs my old sketch. I’m on a mac mini, and after running tests I found my original normal mapping sketch ran at 30FPS with diffuse lighting and it ran at 21FPS using diffuse and specular lighting. Using PShaders, I was able to render specular and diffuse lighting at a solid 60FPS. Keep in mind the first sketch is 2D and my new one is 3D, so I’m not sure if that comparison is fair.

No Demo? :(

Sadly, Processing no longer allows exporting to applets, so I can’t even post a demo running in Java. The perfect solution would be to implement the PShader in Processing.js, which is something I’m considering….

Game 2 for 1GAM: Tetrissing May 17, 2013

Posted by Andor Salga in 1GAM, Game Development, Open Source, Processing, Processing.js.


Click to play!
View the source

I’m officially releasing Tetrissing for the 1GAM challenge. Tetrissing is an open source Tetris clone I wrote in Processing.

I began working on the game during Ludum Dare 26. There were a few developers hacking on LD26 at the Ryerson Engineering building, so I decided to join them. I was only able to stay for a few hours, but I managed to get the core mechanics done in that time.

After I left Ryerson, I did some research and found most of the Tetris clones online lacked some basic features and had almost no polish. I wanted to contribute something different than what was already available. So, that’s when I decided to make this one of my 1GAM games. I spent the next 2 weeks fixing bugs, adding features, audio, and art, and polishing the game.

I’m fairly happy with what I have so far. My clone doesn’t rely on annoying keyboard key repeats, yet it still allows tapping the left or right arrow keys to move a piece 1 block. I added a ‘ghost’ piece feature and a kickback feature, along with pausing, restarting, audio, and art. There was nothing too difficult about all this, but it did require work. So, in retrospect, I want to take on something a bit more challenging for my next 1GAM game.

Lessons Learned

One mistake I made when writing this was overcomplicating the audio code. I used Minim for the Processing version, but I had to write my own implementation for the Processing.js version. I decided to look into the Web Audio API. After fumbling around with it, I did eventually manage to get it to work, but then the sound didn’t work in Firefox. Realizing that I had made a simple matter complex, I ended up scrapping the whole thing and resorting to audio tags, which took very little effort to get working. The SoundManager I have for JavaScript is now much shorter, easier to understand, and still gets the job done.

Another issue I ran into was a bug in the Processing.js library. When using tint() to color my ghost pieces, Pjs would refuse to render one of the blocks that composed a Tetris piece. I dove into the tint() code and tried fixing it myself, but I didn’t get too far. After taking a break, I realized I didn’t really have the time to invest in the Pjs fix and also came up with a dead-simple work-around. Since only the first block wasn’t rendering, I would render that first ‘invisible’ block off screen, then re-render the same block onscreen the second time. Fixing the issue in Pjs would have been nice. But that wasn’t what my main goal was.

Lastly, I was reminded how much time it takes to polish a game. I completed the core mechanics of Tetrissing in a few hours, but it took another 2 weeks to polish it!

If you like my work, please star or fork my repository on Github. Also, please post any feedback, thanks!

Game 1 for 1GAM: Hello Hanoi! February 27, 2013

Posted by Andor Salga in 1GAM, Game Development, Objective-C.


My first iPhone game is available for download on the App Store. I call it Hello Hanoi!. It’s my “Hello World” for iPhone game development, via Towers of Hanoi. My motivation to code and release the game came from the 1GAM challenge, a year-long event that dares developers to release 1 game every month.

When planning the game, I wanted to keep the scope very tight. At the end of it, I wanted a polished, tightly scoped game over a feature-rich, unpolished game. I think I managed to achieve that, but for my next game, I’d like to try the opposite.

The Pixels

I had a difficult time getting a hold of artists, so I decided to just learn pixel art and make all the assets myself. To begin making the art, I tried a few applications: Gimp, Aseprite, and Pixen. Gimp had issues with leaving artifacts on the canvas. Aseprite had problems with cursor position and I felt the UI was awkward. Pixen kept crashing. It was a bit frustrating so I restarted and re-installed them all. I launched Pixen first, and it seemed to work, so I stuck with it.

The result of making all the art myself shifted the release date dramatically. I should have released at the end of January and it’s almost March. At the same time, I had a lot of fun learning pixel art and learning about art in general, such as attention to color, lighting, shadows, and mood.

One particular level was very tedious to create and I soon realized I could generate the art instead! So, I launched Processing and wrote a small sketch to create a series of city buildings. It doesn’t look as good compared to what I could have done by hand, but it was a lot faster to create with this method.

The Code

The code was straightforward, but I did have to learn a few iOS specifics. How do I write a plist to storage? How do I use the new storyboards? Like the art, it was at times frustrating, but in the end it was worth it.

One mistake I did make was over-generalizing. I had the idea that it would be neat to support n number of discs. I wrote all the code to handle the rendering and positioning of the discs, but then realized it didn’t fit into the game. Players would get bored before they reached that many discs, and the necessary horizontal scaling of the disc assets would break the art style. So, I ended up taking the code out. Next time, I’ll try to catch myself over-generalizing.

I had a fun time making my first 1GAM game and I look forward to trying something new for the next one!

Automating my WorkFlow with TexturePacker’s Command Line Options February 17, 2013

Posted by Andor Salga in BNOTIONS, Game Development, Shell Scripting, TexturePacker.

At work I use TexturePacker quite a bit to generate sprite sheets for UIToolkit from within Unity. Each sprite sheet ends up corresponding with a layer in UIToolkit, so the sheets we use are typically named “layer0.png”, “layer02x.png”, “layer0.txt”, “layer02x.txt”, where the .txt files are the metadata files.

During development, I’ll be interrupted every now and then with new and improved assets that need to be pushed into the Unity application. Once I have received an asset, I must take a series of steps to actually use it in Unity.

I open the TexturePacker .TPS file and drag the asset in. The sprites get re-arranged, and then I perform the following:

  • Set the sheet filename for use with retina mobile devices, adding “2x”
  • Set the width of the sheet to 2048
  • Set the height of the sheet to 2048
  • Set scaling to 1.0
  • Click publish

I would then need to do the same thing for non-retina devices. UIToolkit will automatically select the appropriate sized sprite sheet to use based on the “2x” extension.

  • Set the data filename for use with non-retina mobile devices, removing “2x”
  • Set the width of the sheet to 1024
  • Set the height of the sheet to 1024
  • Set scaling to 0.5
  • Click publish

Once TexturePacker creates the 4 files (layer0.png, layer0.json, layer02x.png, layer02x.json ), I would rename the .json files to .txt to keep Unity happy. This process would be done over and over again, until insanity.

Automating Things

This weekend I had some time to investigate a way to automate this tedious process. I wanted a solution such that after editing the .TPS file, I could simply run a script that would take care of the rest of the work. I began by looking into TexturePacker’s command line options. After some tinkering, I came up with a short script that reduces the number of click and edits that I need to make.

I placed the script within the directory that contains all our assets. However, since the output sheets go into a folder that contains only the sheets and data files, I need to reference these paths relative to where the shell script lives. So, this would be a script for one UIToolkit layer:

#!/bin/bash
TexturePacker --sheet ../Resources/layer0.png    --data ../Resources/layer0.txt    --scale 0.5 --width 1024 --height 1024 --format unity layer0.tps
TexturePacker --sheet ../Resources/layer02x.png  --data ../Resources/layer02x.txt  --scale 1.0 --width 2048 --height 2048 --format unity layer0.tps

Note that I can omit changing any of the options that are already set in the .TPS file such as padding, rotation, algorithm, etc. This helps keep the script short and sweet.

The options all correspond to the changes I mentioned previously. One interesting thing to note is the --format option, which removes the need to rename the .json data file to .txt. You might ask why I didn’t just set this option in the TexturePacker GUI. The reason is, I had only just learned about this option after looking over the command line help!

I had created a script that I could run through the command line, but I wanted it to be easier to use. If I had just edited the .TPS file, I would be in Finder, and double-clicking the script would be nicer than opening a terminal to execute it. But running the script through Finder would simply return an error, since the working directory would be my home directory rather than the script’s.

To fix this issue, I had to modify the script a bit more:

#!/bin/bash
DIR="$( cd "$( dirname "$0" )" && pwd )"
TexturePacker --sheet "$DIR"/../Resources/layer0.png    --data "$DIR"/../Resources/layer0.txt    --scale 0.5 --width 1024 --height 1024 --format unity  "$DIR"/layer0.tps
TexturePacker --sheet "$DIR"/../Resources/layer02x.png  --data "$DIR"/../Resources/layer02x.txt  --scale 1.0 --width 2048 --height 2048 --format unity  "$DIR"/layer0.tps

When the shell script runs, we take the directory portion of the script’s path, cd into that directory, call pwd, and assign the resulting absolute path to our DIR variable. This part took a bit more time, as I learned that spaces on either side of the equals sign will confuse bash, so I had to leave those out.
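For anyone new to this idiom, here it is unpacked one piece at a time (the intermediate variable names are my own annotations, not part of the original script):

```shell
#!/bin/bash
# The DIR idiom, step by step:
script_path="$0"                         # path used to invoke the script
script_dir="$(dirname "$script_path")"   # its directory (may be relative)
DIR="$( cd "$script_dir" && pwd )"       # cd there; pwd prints the absolute path
echo "$DIR"
# Note: 'DIR = ...' with spaces makes bash look for a command named DIR.
```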

Now, if a new asset is sent to me, I open the .TPS file, add the file and save. Then I can run this script by a simple double-click. Tada!

Next steps

Using this method, I need to create a shell script for every UIToolkit layer. This isn’t nearly as bad as it sounds, since we typically only have 2-3 layers. But what I’d like to do in the future is investigate AppleScript, which can convert a shell script into an app that accepts files dropped onto it. If I did this, I could drop the .TPS file onto the app, and the script could extract the filename and do the rest. This would remove the need for a script for every layer.
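In the meantime, parameterizing the script is another option. A hypothetical sketch (the pack function and the TEXTUREPACKER override are my own inventions, not part of TexturePacker or UIToolkit):

```shell
#!/bin/bash
# Hypothetical: one function handles any layer, deriving every filename
# from the .TPS file passed in. Set TEXTUREPACKER=echo for a dry run.
pack() {
  local dir tps base
  dir="$( cd "$( dirname "$0" )" && pwd )"
  tps="$1"
  base="$(basename "$tps" .tps)"   # layer0.tps -> layer0
  "${TEXTUREPACKER:-TexturePacker}" --sheet "$dir/../Resources/${base}.png"   --data "$dir/../Resources/${base}.txt"   --scale 0.5 --width 1024 --height 1024 --format unity "$dir/$tps"
  "${TEXTUREPACKER:-TexturePacker}" --sheet "$dir/../Resources/${base}2x.png" --data "$dir/../Resources/${base}2x.txt" --scale 1.0 --width 2048 --height 2048 --format unity "$dir/$tps"
}
```

Each per-layer script would then collapse to a single call like `pack layer0.tps`.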

Sprite Sheet Guide Generator January 13, 2013

Posted by Andor Salga in Game Development, Pixel Art, Processing.js.
add a comment

sprite sheet guide generator
Click the image above to get the tool.

I began using Pixen to start making pixel art assets for a few games I’m developing for the 1 game a month challenge.

While pixelating away, I found myself creating a series of sprite sheets for bitmapped fonts. I created one here, then another, and each time I ran into the same problem: before I began drawing each glyph, I first had to make sure I had a nice grid to keep all the characters in line. Each font used a different number of pixels, so I had to start from scratch every time. You can imagine that counting rows and columns of pixels and drawing each line separating the glyphs is extremely tedious. I needed to eliminate this from my workflow.

I decided to create a tool to take away this painful process. What I needed was a sprite sheet guide generator: a tool that creates an image of a grid based on these inputs:

  • Number of sprites per row
  • Number of sprites per column
  • Width of the sprite
  • Height of the sprite
  • Border width and color

I used Processing.js to create the tool, and I found the results quite useful. As the tool neared completion, I realized I could alternate the sprite background colours to help me even more when I’m drawing down at the pixel level, so I implemented that as well.
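As a rough sketch of the arithmetic behind those inputs (my example numbers, and an assumed layout where a grid line sits between and around every sprite):

```shell
#!/bin/bash
# Guide-image size from the tool's inputs (example values; assumes a
# border line around and between every sprite).
cols=16  rows=6      # sprites per row / per column
sw=8     sh=12       # sprite width / height in pixels
b=1                  # border (grid line) width
width=$((  cols * sw + (cols + 1) * b ))
height=$(( rows * sh + (rows + 1) * b ))
echo "guide image: ${width}x${height}"   # -> guide image: 145x79
```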

You can run the tool right here or you can click on the image at the start of this post.

New Year Habits December 31, 2012

Posted by Andor Salga in Habits, Personal Development.
1 comment so far

Last year I decided to begin exercising. Not for a few months or a couple of years, but for the rest of my life. I made a resolution to do some form of physical or mental exercise for 15 minutes a day, every day. 15 minutes may sound meager, but by making a small habit change, I could later extend my time without too much difficulty. I managed to keep my commitment to bike, run, meditate, or do yoga every day.

I had some slips when I missed a few days while moving apartments and traveling, but this small habit change got me to start running, and I eventually ran my first half marathon (: So, it worked out well.

If you’re setting some New Year’s resolutions, here are some tips on forming your new habits:

Make it quantifiable

You must be able to measure your goal so it can serve as an indication that you are on the right track. Either you did it or you didn’t; there should be no ambiguity. Setting a number to your goal is the easiest way to prevent this ambiguity. My metric was 15 minutes: either I spent those 15 minutes on myself or I didn’t. So figure out: How long? How many pages? How many phone calls? What time? Put a number on it so that when you are done, you can tick it off.

Keep a log

‘Ticking it off’ is an important step in habit forming because it helps motivate you and, like I mentioned before, shows your progress. Jerry Seinfeld has a productivity secret: Don’t Break the Chain. This is a great tool that I use to help me work on my goals. Upon completing your task, tick it off in your log to track your progress.

When you fail… start over

Don’t be hard on yourself if/when you fail; simply start over. Last year I did some traveling, which threw my routine out of whack, and I ended up missing some days of exercise. I could have beaten myself up over this, but that would have been counter-productive. If you slip back into your old habits, simply start over.

Do what works for you

Don’t set yourself up for failure by doing something you hate. Forming new habits can be difficult, so there’s no need to make it even more challenging. You know yourself best, so choose a task that is somewhat enjoyable. Personally, I love stationary bikes. It’s hands-free, I can listen to music, and I can enjoy the scenery while I pedal away. So, if you want to succeed in your new habits, make sure to do what works for you.

Good luck! (:
