
BitCam.Me December 25, 2012

Posted by Andor Salga in Open Source, Pixel Art, Processing.js.

Check this out: I created a WebRTC demo that pixelates your webcam video stream: BitCam.me.

I recently developed a healthy obsession with pixel art and began making some doodles in my spare time. Soon after I started doing this, I wondered what it would be like to generate pixel art programmatically. So I fired up Processing and made a sketch that did just that. The sketch pixelated a PNG by averaging the colors of the pixels in each block.
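
The core of that sketch boils down to something like this. It's a minimal sketch of the idea rather than the original code; the image name and block size are placeholders.

// Pixelate an image by averaging each block of pixels.
PImage img;
int blockSize = 8;

void setup() {
  size(256, 256);
  img = loadImage("myImage.png");
  img.resize(width, height);
  noStroke();
  noLoop();
}

void draw() {
  for (int x = 0; x < width; x += blockSize) {
    for (int y = 0; y < height; y += blockSize) {
      float r = 0, g = 0, b = 0;
      int count = 0;
      // Average every pixel inside this block
      for (int i = x; i < x + blockSize; i++) {
        for (int j = y; j < y + blockSize; j++) {
          color c = img.get(i, j);
          r += red(c);
          g += green(c);
          b += blue(c);
          count++;
        }
      }
      // Draw one big 'pixel' with the averaged color
      fill(r / count, g / count, b / count);
      rect(x, y, blockSize, blockSize);
    }
  }
}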

After completing that sketch, I realized I could easily upgrade what I had written to use WebRTC instead of a static image. I thought the demo would be much more fun and engaging in real time. I added the necessary JavaScript and was pretty excited about it (:

I then found SuperPixelTime and saw that it did something similar to what I had written. Unlike my demo, though, it had some nice options for changing the color palette. I read the code, figured making those changes wouldn’t be difficult either, and soon had my own controls for changing palettes.
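
Palette support really just means snapping each averaged color to the nearest entry in a fixed palette. Here's a rough sketch of that lookup; the palette values below are made up, not SuperPixelTime's.

// Replace a color with the closest color in a fixed palette.
color[] palette = { #1A1C2C, #5D275D, #B13E53, #EF7D57, #FFCD75, #A7F070 };

color snapToPalette(color c) {
  color best = palette[0];
  float bestDist = Float.MAX_VALUE;
  for (color p : palette) {
    float dr = red(c) - red(p);
    float dg = green(c) - green(p);
    float db = blue(c) - blue(p);
    float dist = dr*dr + dg*dg + db*db;  // squared distance is enough
    if (dist < bestDist) {
      bestDist = dist;
      best = p;
    }
  }
  return best;
}

void setup() {
  // Example: snap an averaged block color to the palette
  println(hex(snapToPalette(color(200, 120, 90))));
}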

I had a great time making the demo. Let me know what you think!

Enjoy!

Engage3D Hackathon Coming Soon! December 8, 2012

Posted by Andor Salga in Kinect, Open Source, point cloud, webgl.

A month ago, Bill Brock and I pitched our idea over Google chat: an open source, web-based 3D videoconferencing system for the Mozilla Ignite Challenge. Will Barkis from Mozilla recorded and moderated the conversation and then sent it off to a panel of judges. The pitch was for a slice of the $85,000 being doled out to the winners of the Challenge.

After some anticipation, we got word that we were among the winners. We would receive $10,000 in funding to support the development of our prototype. Our funding will cover travel expenses, accommodations, the purchasing of additional hardware and the development of the application itself.

We will also take on two more developers and have a hackathon closer to the end of the month. Over the span of four days we will iterate on our original code and release something more substantial. The Company Lab in Chattanooga has agreed to provide us with a venue to hack and a place to plug into the network. Both Bill and I are extremely excited to get back to hacking on Engage3D and to get back to playing with the gig network.

We will keep you updated on our Engage3D progress, stay tuned!

Developing engage3D – Phase 1 October 25, 2012

Posted by Andor Salga in Kinect, Open Source, point cloud, webgl.


A point cloud of Bill Brock rendered with WebGL

I am working with Bill Brock (a PhD student from Tennessee) to develop an open source 3D video conferencing system that we are calling engage3D. We are developing this application as part of Mozilla’s Ignite Challenge.

Check out the video!

During the past few days, Bill and I made some major breakthroughs in terms of functionality. Bill sent me Kinect depth and color data via a server he wrote. We then managed to render that data in the browser (on my side) using WebGL. We are pretty excited about this since we have been hacking away for quite some time!
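
The engage3D client renders with WebGL, but the core idea is simply to treat every depth sample as a 3D vertex and draw the frame as a point cloud. Here's a rough Processing/P3D sketch of that idea, with a dummy array standing in for the Kinect depth stream (the real demo also carries per-point color from Bill's server):

// Not the actual engage3D code: a rough sketch of the core idea.
int w = 640, h = 480;
float[] depthData = new float[w * h];

void setup() {
  size(640, 480, P3D);
  // Fill the placeholder with a dummy ripple so the sketch runs standalone
  for (int i = 0; i < w * h; i++) {
    depthData[i] = 200 + 50 * sin(i * 0.01);
  }
}

void draw() {
  background(0);
  translate(width/2, height/2, -400);
  stroke(255);
  beginShape(POINTS);
  // Skip samples to keep the frame rate reasonable
  for (int y = 0; y < h; y += 4) {
    for (int x = 0; x < w; x += 4) {
      int i = y * w + x;
      vertex(x - w/2, y - h/2, -depthData[i]);
    }
  }
  endShape();
}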

There have been significant drawbacks to developing this application over commodity internet: I managed to download over 8 GB of data in a single day while experimenting with the code. Hopefully, we will soon be able to port this to the GENI resources in Chattanooga, TN for further prototyping and testing.

Even though we are still limited to a conventional internet connection, we want to do some research into data compression. We have also been struggling with calibrating the Kinect, which is something we hope to resolve soon.

Experimenting with Normal Mapping September 26, 2012

Posted by Andor Salga in Game Development, Open Source, Processing, Processing.js.


Click me!

** Update March 20 2014 **
The server where this sketch was being hosted went down. I recently made several performance improvements to the sketch and re-posted it on OpenProcessing.

Quick note about the demo above. I’m aware the performance on Firefox is abysmal and I know it’s wonky on Chrome. Fixes to come!

I’ve heard and seen the use of normal mapping many times, but I have never experimented with it myself, so I decided I should, just to learn something new. Normal mapping is a type of bump mapping. It is a way of simulating bumps on an object, usually in a 3D game. These bumps are simulated with the use of lights. To get a better sense of this technique, click on the image above to see a running demo. The example uses a 2D canvas and simulates Phong lighting.

So why use it and how does it work?

The great thing about normal mapping is that you can simulate the detail of a complex surface on a simplified object without providing the extra vertices. By providing only the normals and then lighting the object, you make it seem like the object has more detail than it actually does. If we wanted to place the code we had in a 3D game, we would only need 4 vertices to define a quad (maybe it could be a wall), and along with the normal map, we could render some awesome Phong illumination.

So, how does it work? Think of what a bitmap is. It is just a 2D map of bits, where each pixel contains the color components that make up the entire graphic. A normal map is also a 2D map. What makes normal maps special is how their data is interpreted. Instead of holding a ‘color’ value, each pixel actually stores a vector that defines where the corresponding part of the color image is ‘facing’, also known as our normal vector.

These normals need to be somehow encoded into an image. This can be easily done since we have three floating point components (x,y,z) that need to be converted into three 8 or 16 bit color components (r,g,b). When I began playing with this stuff, I wanted to see what the data actually looked like. I first dumped out all the color values from the normal map and found the range of the data:

Red (X) ranges from 0 to 255
Green (Y) ranges from 0 to 255
Blue (Z) ranges from 127 to 255

Why is Z different? When I first looked at this, it seemed to me that each component needs to have 127 subtracted from it so the values map onto their corresponding negative number lines in a 3D coordinate system. However, Z will always point directly towards the viewer, never away. If you do a search for normal map images, you will see the images are blue in color, so it makes sense that the blue is pronounced: the normal is always pointing ‘out’ of the image. If Z ranged from 0 to 255, subtracting 127 would sometimes result in a negative number, which doesn’t make sense. So, after subtracting 127 from each component:

X ranges from -127 to 128
Y ranges from -127 to 128
Z ranges from 0 to 128

The way I picture this is I imagine that all the normals are contained in a translucent semi-sphere with the semi-sphere’s base lying on the XY-plane. But since the Z range is half of that of X and Y, it would appear more like a squashed semi-sphere. This tells us the vectors aren’t normalized. But that can be solved easily with normalize(). Once normalized, they can be used in our lighting calculations. So now that we have some theoretical idea of how this rendering technique works, let’s step through some code. I wrote a Processing sketch, but of course the technique can be used in other environments.

// Declare globals to avoid garbage collection

// colorImage is the original image the user wants to light
// targetImage will hold the result of blending the 
// colorImage with the lighting.
PImage colorImage, targetImage;

// normalMap holds our 2D array of normal vectors. 
// It will be the same dimensions as our colorImage 
// since the lighting is per-pixel.
PVector normalMap[][];

// shine will be used in specular reflection calculations
// The higher the shine value, the shinier our object will be
float shine = 40.0f;
float specCol[] = {255, 128, 50};

// rayOfLight will represent a vector from the light source
// (cursor coords) to the current pixel
PVector rayOfLight = new PVector(0, 0, 0);
PVector view = new PVector(0, 0, 1);
PVector specRay = new PVector(0, 0, 0);
PVector reflection = new PVector(0, 0, 0);

// These will hold our calculated lighting values
// diffuse will be white, so we only need 1 value
// Specular is orange, so we need all three components
float finalDiffuse = 0;
float finalSpec[] = {0, 0, 0};

// nDotL = Normal dot Light. This is calculated once
// per pixel in the diffuse part of the algorithm, but we may
// want to reuse it if the user wants specular reflection
// Define it here to avoid calculating it twice per pixel
float nDotL;

void setup(){
  size(256, 256);

  // Create targetImage only once
  colorImage = loadImage("data/colorMap.jpg");
  targetImage = createImage(width, height, RGB);

  // Load the normals from the normalMap into a 2D array to 
  // avoid slow color lookups and clarify code
  PImage normalImage =  loadImage("data/normalMap.jpg");
  normalMap = new PVector[width][height];
  
  // i indexes into the 1D array of pixels in the normal map
  int i;
  
  for(int x = 0; x < width; x++){
    for(int y = 0; y < height; y++){
      i = y * width + x;

      // Convert the RGB values to XYZ
      float r = red(normalImage.pixels[i]) - 127.0;
      float g = green(normalImage.pixels[i]) - 127.0;
      float b = blue(normalImage.pixels[i]) - 127.0;
      
      normalMap[x][y] = new PVector(r, g, b);
      
      // Normal needs to be normalized because Z
      // ranged from 127-255
      normalMap[x][y].normalize();
    }
  }
}

void draw(){
  // When the user is no longer holding down the mouse button, 
  // the specular highlights aren't used. So reset the values
  // every frame here and set them only if necessary
  finalSpec[0] = 0;
  finalSpec[1] = 0;
  finalSpec[2] = 0;
  
  // Per frame we iterate over every pixel. We are performing
  // per-pixel lighting.
  for(int x = 0; x < width; x++){
    for(int y = 0; y < height; y++){
      
      // Simulate a point light which means we need to
      // calculate a ray of light for each pixel. This vector
      // will go from the light/cursor to the current pixel.
      // Don't use PVector.sub() because that's too slow.
      rayOfLight.x = x - mouseX;
      rayOfLight.y = y - mouseY;

      // We only have two dimensions with the mouse, so we
      // have to create the third dimension ourselves.
      // Force the ray to point into 3D space down -Z. 
      rayOfLight.z = -150;
      
      // Normalize the ray so it can be used in a dot product
      // operation to get sensible values (-1 to 1)
      // The normal will point towards the viewer
      // The ray will be pointing into the image
      rayOfLight.normalize();
      
      // We now have a normalized vector from the light
      // source to the pixel. We need to figure out the
      // angle between this ray of light and the normal
      // to calculate how much the pixel should be lit.

      // Say the normal is [0,1,0] and the light is [0,-1,0]
      // The normal is pointing up and the ray, directly down.
      // In this case, the pixel should be fully 100% lit
      // The angle would be PI

      // If instead the ray was [0,1,0], pointing the same
      // way as the normal, it would not contribute light
      // at all: 0% lit. The angle would be 0 radians

      // We can easily calculate the angle by using the
      // dot product and rearranging the formula.
      // Omitting  magnitudes since they are = 1
      // ray . normal = cos(angle)
      // angle = acos(ray . normal)

      // Taking the acos of the dot product returns
      // a value between 0 and PI, so we normalize
      // that and scale to 255 for the color amount     
      nDotL = rayOfLight.dot(normalMap[x][y]);
      finalDiffuse = acos(nDotL)/PI * 255.0;
      
      // Avoid more processing by only calculating
      // specular lighting if the user wants it.
      // It is fairly processor intensive.
      if(mousePressed){
        // The next few lines calculate the reflection vector
        // using Phong specular illumination. I've written
        // a detailed blog about how this works: 
        // http://asalga.wordpress.com/2012/09/23/understanding-vector-reflection-visually/ 
        // Also, when we have to perform vector subtraction
        // as part of calculating the reflection vector,
        // do it manually since calling sub() is slow.
        // Reuse the global vector rather than allocating
        // a new PVector for every pixel.
        reflection.set(normalMap[x][y].x,
                       normalMap[x][y].y,
                       normalMap[x][y].z);
        reflection.mult(2.0 * nDotL);
        reflection.x -= rayOfLight.x;
        reflection.y -= rayOfLight.y;
        reflection.z -= rayOfLight.z;
        
        // The view vector is (0, 0, 1), that is, it points
        // directly towards the viewer. The dot product
        // of two normalized vectors returns a value from
        // (-1 to 1). However, none of the normal vectors
        // point away from the viewer, so we don't have to
        // deal with the dot product going negative and
        // giving us a negative specular intensity.
        
        // Raise the result of that dot product value to the
        // power of shine. The higher shine is, the shinier
        // the surface will appear.        
        float specIntensity = pow(reflection.dot(view),shine);
        
        finalSpec[0] = specIntensity * specCol[0];
        finalSpec[1] = specIntensity * specCol[1];
        finalSpec[2] = specIntensity * specCol[2];
      }
      
      // Now that the specular and diffuse lighting are
      // calculated, they need to be blended together
      // with the original image and placed in the
      // target image. Since blend() is too slow, 
      // perform our own blending operation for diffuse.
      targetImage.set(x,y, 
        color(finalSpec[0] + (finalDiffuse *   
                            red(colorImage.get(x,y)))/255.0,

              finalSpec[1] + (finalDiffuse * 
                          green(colorImage.get(x,y)))/255.0,

              finalSpec[2] + (finalDiffuse *  
                         blue(colorImage.get(x,y)))/255.0));
    }
  }
  
  // Draw the final image to the canvas.
  image(targetImage, 0,0);
}

Whew! Hope that was a fun read. Let me know what you think!

Understanding Vector Reflection Visually September 23, 2012

Posted by Andor Salga in Game Development, Math.

I started experimenting with normal maps when I came to the subject of specular reflection. I quickly realized I didn’t understand how the vector reflection part of the algorithm worked. It prompted me to investigate exactly how all this magic was happening. Research online didn’t prove very helpful. Forums are littered with individuals throwing around the vector reflection formula with no explanation whatsoever. This forced me to step through the formula piecemeal until I could make sense of it. This blog post attempts to guide readers through how one vector can be reflected onto another via logic and geometry.

Vector reflection is used in many graphics and gaming applications. As I mentioned, it is an important part of normal mapping since it is used to calculate specular highlights. There are applications other than lighting, but let’s use the lighting problem for illustration.

Let’s start with two vectors. We would know the normal of the plane we are reflecting off of along with a vector pointing to the light source. Both of these vectors are normalized.


L is a normalized vector pointing to our light source. N is our normal vector.

We are going to work in 2D, but the principle works in 3D as well. What we are trying to do is figure out: if a light in the direction L hits a surface with normal N, what would be the reflected vector? Keep in mind, all the vectors here are normalized.

I drew R here geometrically, but don’t assume we can simply negate the x component of L to get R. If the normal vector was not pointing directly up, it would not be that easy. Also, this diagram assumes we have a mirror-like reflecting surface. That is, the angle between N and L is equal to the angle between N and R. I drew R as a unit vector since we only really care about its direction.

So, right now, we have no idea how to get R.
R = ?

However, we can use some logic to figure it out. There are two vectors we can play with. If you look at N, you can see that if we scale it, we can create a meaningful vector, call it sN. Then we can add another vector from sN to R.

What we are doing here is exploiting the parallelogram law of vectors. Our parallelogram runs from the origin to L to sN to R back to the origin. What is interesting is that this new vector from sN to R has the opposite direction of L. It is -L!

Now our formula says that if we scale N by some amount s and subtract L, we get R.
R = sN - L

Okay, now we need to figure out how much to scale N. To do this, we need to introduce yet another vector from the origin to the center of the parallelogram.

Let’s call this C. If we multiply C by 2, we can get sN (since C is half of the diagonal of the parallelogram).
2C = sN

Replacing sN in our formula with 2C:
R = 2C - L

Figuring out C isn’t difficult, since we can use vector projection. If you are unsure about how vector projection works, watch this KhanAcademy video.

If we replace C with our projection work, the formula starts to look like something! It says we need to project L onto N (to get C), scale it by two then subtract L to get R!
R = 2((\frac{N \cdot L}{|N|^2})N) - L

However, this can be simplified since N is normalized and getting the magnitude of a normalized vector yields 1.
R = 2(N \cdot L)N - L

Woohoo! We just figured out vector reflection geometrically, fun!
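
As a quick sanity check, here is a tiny Processing snippet that plugs concrete numbers into the formula (not part of the derivation above, just a verification):

// Verify R = 2(N . L)N - L with concrete numbers.
PVector reflect(PVector L, PVector N) {
  PVector R = PVector.mult(N, 2 * N.dot(L)); // scale N by 2(N . L)
  R.sub(L);                                  // subtract L
  return R;
}

void setup() {
  // Light up and to the left, surface normal pointing straight up
  PVector L = new PVector(-1, 1, 0);
  L.normalize();
  PVector N = new PVector(0, 1, 0);

  // Prints roughly [ 0.707, 0.707, 0 ]: the mirror image of L about N
  println(reflect(L, N));
}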

I hope this has been of some help to anyone trying to figure out where the vector reflection formula comes from. It has been frustrating piecing it all together and challenging trying to explain it. Let me know if this post gave you any ‘Ah-ha’ moments (:

Reflections on Hackanooga 2012 September 19, 2012

Posted by Andor Salga in Hackanooga, Kinect, point cloud.

This past weekend, I had the opportunity to participate in Hackanooga, a 48-hour hackathon in Chattanooga, Tennessee. The event was hosted by Mozilla and US Ignite with the purpose of encouraging and bringing together programmers, designers, and entrepreneurs to hack together demos that make use of Chattanooga’s gigabit network.

We only had two days to hack something together, so I wanted to make my goals simple and realistic. My plan was to set up a server on the gig network, upload dynamic point cloud data to the server and then stream the data back in real-time. In case I got that working, I wanted to try to avoid disk access altogether and stream the data from a Kinect and render it using WebGL.

Before the hacking began, anyone who wasn’t assigned a project had the chance to join one. I was surprised that several developers from the University of Tennessee at Chattanooga approached my table to hear more about my project. We quickly formed a group of four and dove right into setting up the server, editing config files, checking out the super-fast gig connection, and making sure our browsers supported WebGL. After a lot of frustration we finally managed to ‘stream’ the dynamic point cloud from the server. Although the data seemed to stream, it took a while to kick in on the client side. It was almost as if the files were being cached on the server before being sent over the wire. This is still something I need to investigate.

Once we had that working, we moved on to the Kinect. We created a C++ server that read the Kinect data and transferred it to a Node server via sockets. The Node server then sent the data off to any client connections. My browser was one of the clients rendering the data; however, the rendered point clouds were a mess. One of the applications we wrote had a bug that we didn’t manage to fix in time ): Even though we didn’t manage to hammer out all the issues, I still had a lot of fun working with a team of developers.

Rumor has it that Hackanooga may become a yearly event. I’m already excited (:

Excuses, excuses… June 13, 2012

Posted by Andor Salga in Personal Development.

Yesterday, my legs hurt like hell. It was my second day running and after my run, I had trouble with stairs and even walking. Moving my legs in any way was painful.

I began running as a form of mental exercise, not a physical one. I wanted to practice getting past the mental barriers I create in my life. I figured, if I can overcome the pain, the doubt and fear of a physical exercise, I can get past any mental barriers too.

Today was the third day and things were different. I slept in to recover and thought my legs would be better by morning, but they were in worse shape than ever. Getting out of bed was hard and walking was slow. I had to go downstairs one stair at a time. I looked out my window and saw people, cars—I was too late. The streets were busy, it was cold, I didn’t want to cause any damage to my muscles, yesterday I had a beer, I’m tired, I have to work—these were my excuses. Look inside yourself and understand that your excuses are just as stupid, and that these stupid excuses are holding you back from your potential.

I remembered this was a mental exercise. The whole point was to break through any mental barriers. I got honest with myself and realized my excuses were ridiculous. If I didn’t get outside, I could rest my legs, yes, but mentally it would be the worst thing I could do for myself.

I asked myself, “What if?”. What if I did run today? Would my legs break? Probably not. But more importantly, how would I feel if I could do it? I would have overcome a huge obstacle. I would cry in pain, but hell, I would feel incredible mentally. That was enough for me. I put on my shoes and I was out the door. My run was about 3KM total. By the end of it, I had snot running down my nose and I was gasping for air. It was ugly, but I loved it. I had broken past my mental and physical barrier.

Tomorrow holds a new challenge, but I believe I have the courage to take it on.

Get honest with yourself today, what are your excuses? Write them down. Are these going to hold you back from your ultimate potential? Will you be on your deathbed telling everyone stories about why you couldn’t fulfill your dreams? Or will you tell everyone you managed to fulfill your ambitions even in spite of all your many challenges?

Porting Crayon Physics to the Web June 12, 2012

Posted by Andor Salga in Game Development, Games, Open Source, Processing.js.


click that: ^^

It’s been a while since I gave my blog some love, so here goes.

Many semesters ago, I tried to make an HTML5 port of Crayon Physics Deluxe using Processing.js and Box2DJS. I got the rendering right, but I couldn’t get the physics to work correctly. My demo was limited to bounding box collisions.

I recently tried again to see if I could make this thing work. When I hacked together a demo of a ball rolling across the canvas, which was composed of several shapes, I became super-giddy. I knew it was indeed possible to port Crayon Physics to the Web. So I kept hacking on it, and I now have something semi-decent. There are of course a bunch of bugs to work out, but I’m excited to see this project’s potential.

No Comply Game Prototype with Processing January 25, 2012

Posted by Andor Salga in Game Development, Gladius, Open Source, Processing.

Last week I met with Dave Humphrey and Jon Buckley to discuss creating a game for the HTML5 games week which will take place at Mozilla’s Toronto office in mid-February. The main purpose of the event is to drive development of the Gladius game engine and to showcase it.

At the end of the meeting we decided it would make the most sense to upgrade the No Comply demo by adding interaction—making a small game. Since we only have a few weeks, we decided to keep it simple. Keeping that in mind I created a game specification.

Play Analysis

After putting together the spec, I started on a prototype. I began the prototype by playing and analyzing the game mechanics of Mortal Kombat I. The game is simple enough to emulate given the time frame. I took some notes and concluded that characters always appear to be in discrete states. If characters are in a particular state, they may or may not be able to transition into another state. Initially a player is idle. From there, they can transition into a jump. Once jumping, they cannot block, but they are allowed to punch and kick. Having understood the basic rules, I drew a diagram to visually represent some of the states.

The diagram led me to believe I could use the state pattern to keep the game extensible. We can get a basic game working and the design should lend itself to later (painless) modification.

Prototyping the Prototype

Having worked with Processing for some time I knew it would be an ideal tool to construct a prototype of the game. Processing enables developers to get graphical interactive software up and running quickly and easily. The structure is elegant and the language inherits Java’s simple object-oriented syntax.

I began creating a Processing sketch by adding the necessary classes for player states such as moving, jumping and punching. I hooked in keyboard input to allow changing states and rendered text to indicate the current state. I was able to fix most of the bugs by playing around with different key combinations and just looking at the text output.
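
Stripped of graphics, the skeleton of that prototype looked roughly like the sketch below. This is a simplified reconstruction rather than the actual prototype code, and the key bindings are made up.

// Each state decides which transitions are legal.
abstract class PlayerState {
  abstract PlayerState handleInput(char k);
  abstract String name();
}

class IdleState extends PlayerState {
  PlayerState handleInput(char k) {
    if (k == 'j') return new JumpState();
    if (k == 'p') return new PunchState();
    return this;
  }
  String name() { return "idle"; }
}

class JumpState extends PlayerState {
  PlayerState handleInput(char k) {
    if (k == 'p') return new PunchState(); // can punch while jumping
    if (k == 'l') return new IdleState();  // 'land' back to idle
    return this;                           // but no blocking mid-air
  }
  String name() { return "jumping"; }
}

class PunchState extends PlayerState {
  PlayerState handleInput(char k) {
    return new IdleState(); // any key ends the punch
  }
  String name() { return "punching"; }
}

PlayerState state = new IdleState();

void setup() {
  size(200, 100);
  textSize(20);
}

void draw() {
  background(0);
  // Render text to indicate the current state
  text(state.name(), 20, 50);
}

void keyPressed() {
  state = state.handleInput(key);
}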

I eventually added graphical content, but intentionally kept it crappy. I resisted the urge to create attractive assets since they would be replaced with the No Comply sprites anyway. Neglecting the aesthetics was a challenge—if you cringe at the graphics, I succeeded.

Okay, still want to play it?

Next Steps

I omitted collision detection and some simple game logic from the prototype, but I believe it demonstrates that we can create a structure good enough for a fighting game built on Gladius. Today I’ll be switching gears and starting to hack on Gladius to get COLLADA importing working, which we’ll discuss in a Paladin meeting early next week.

Learning to Wield and Forge Gladius January 23, 2012

Posted by Andor Salga in Game Development, Gladius, Open Source.

Wielding

For the next few weeks I’ll be working on the open source project Gladius. More specifically, I’ll be working on CubicVR.js, a rendering engine which is one part of the Gladius 3D game engine. Gladius in turn is part of Paladin, Mozilla’s initiative to bring awesome gaming technologies to the Web.

Since Gladius is a game engine, it is essentially a conglomeration of many game-related technologies:

Drawing Stuff

CubicVR.js is a JavaScript port of Charles Cliffe’s CubicVR game engine; it is the part of Gladius responsible for rendering using WebGL.

Saving Stuff

GameSaver is a project which uses Node.js and BrowserID to save JSON game state on a server. Check out a prototype which uses GameSaver at MyFavoriteBeer.org.

Physics Stuff

For collision detection and response, Gladius uses AMMO.js: a JavaScript port of the C++ Bullet physics engine.

Controlling Stuff

My supervisor, David Humphrey, and colleagues in my lab have been working on two important aspects of game input: the Gamepad API and the Mouse Lock API. Check them out; they’re very interesting.

Other Stuff

This list is not exhaustive. As the project matures it is expected other libraries and technologies will be incorporated.

Forging

The release date for Gladius 0.1 is mid-February, so I’ll be contributing where I can to help out with the release. Last week I started by making a minor tweak to CubicVR.js. Not much, but it’s a start. I also noticed there wasn’t a way for developers to simply check for AABB-AABB intersection. So I filed the issue and began working on that as well. This week I’ll be working with Bobby Richter to fix COLLADA-related bugs, which is super exciting. (:
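
For reference, the AABB-AABB check comes down to an overlap test on each axis. It's sketched here in Processing syntax for clarity; the actual addition would be JavaScript inside CubicVR.js.

// Two axis-aligned bounding boxes intersect only if they
// overlap on every axis.
boolean aabbIntersects(PVector minA, PVector maxA,
                       PVector minB, PVector maxB) {
  return maxA.x >= minB.x && minA.x <= maxB.x &&
         maxA.y >= minB.y && minA.y <= maxB.y &&
         maxA.z >= minB.z && minA.z <= maxB.z;
}

void setup() {
  PVector minA = new PVector(0, 0, 0), maxA = new PVector(2, 2, 2);
  PVector minB = new PVector(1, 1, 1), maxB = new PVector(3, 3, 3);
  println(aabbIntersects(minA, maxA, minB, maxB)); // true
}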

Do you want to get involved? You can find a number of ways to get in contact with developers and start contributing right here.
