
Shadows in WebGL Part 1 December 12, 2011

Posted by Andor Salga in FSOSS, Open Source, point cloud, webgl, XB PointStream.
4 comments

RUN ME!

In my last post I wrote about an anaglyph demo I created for my FSOSS presentation in October. It was one of a series of delayed posts which I only recently had time to write up. So, in this post I'll be moving on to my next fun experiment: shadows in WebGL.

Shadows are useful since they not only add realism, but can also provide additional visual cues in a scene. Having never implemented any type of shadows, I started by performing some preliminary research and found that there are numerous methods to achieve this effect. Some of the more common techniques include:

  • vertex projection
  • projected planar shadows
  • shadow mapping
  • shadow volumes

I chose vertex projection since it seemed very straightforward. After a few sketches, I got a fairly good grasp of the idea. Given the position of a light and a vertex, the shadow cast (for that vertex) will appear where the line through those two points intersects the ground plane, that is, at the line's x-intercept. If we had the following values:

  • Light = [4, 4]
  • Vertex = [1, 2]


Our shadow would be drawn at [-2, 0]. To see why: the line through the light and the vertex has a slope of 2/3, so following it down from the vertex at [1, 2] to y = 0 means dropping 2 units and moving 3 units left, landing at x = -2. Note that the y component is zero and would be zero for every other vertex as well, since we're concentrating on planar shadows.

At this point, I understood the problem well; I just needed a simple formula to get this result. If you run a search for “vertex projection” and “shadows” you’ll find a snippet of code on GameDev.net which provides the formula for calculating the x and z components of the shadow. But if you actually try it for the x component:

Sx = Vx - \frac{Vy}{Ly} - Lx
Sx = 1 - \frac{2}{4} - 4
Sx = -3.5

It doesn’t work.

When I ran into this, I had to take a step back to think about the problem and review my graphs. I was convinced I could derive a working formula that would be just as simple as the one above. So I conducted additional research until I eventually found the point-slope equation of a line.

Point-Slope Equation

The point-slope equation of a line is useful for determining another point on a line given the slope and one known point on that line. This is exactly the scenario we have!

y - y1 = m(x - x1)

Where:
m – The slope: m = \frac{Ly - Vy}{Lx - Vx}. This is known since we have two given points on the line: the vertex and the light.

[x1, y1] – A known point on the line. In this case: the light.

[x, y] – Another point on the line which we’re trying to figure out: the shadow.

Since the final 3D shadow will lie on the xz-plane, the y components will always be zero. We can therefore remove that variable which gives us:

-y1 = m(x - x1)

Now that the only unknown is x, we can start isolating it by dividing both sides by the slope:
-\frac{y1}{m} = \frac{m(x - x1)}{m}

Which gives us:
-\frac{y1}{m} = x - x1

And after rearranging we get our new formula, but is it sound?
x = x1 - \frac{y1}{m}

If we use the same values as above as a test (the slope works out to \frac{2}{3}):
x = 4 - \frac{4}{\frac{2}{3}}
x = 4 - 6
x = -2

It works!

I now had a way to get the x component of the shadow, but what about the z component? What I had done so far was solve the problem in two dimensions. But if you think about it, the 3D case breaks down into two separate 2D problems: to get the z component of the shadow, we just apply the same formula using the z components of the light and the vertex.

Shader Shadows

The shader code is a bit verbose, but at the same time, very easy to understand:

void drawShadow(in vec3 l, in vec4 v){
  // Calculate slope.
  float slopeX = (l.y-v.y)/(l.x-v.x);
  float slopeZ = (l.y-v.y)/(l.z-v.z); 

  // Flatten by making all the y components the same.
  v.y = 0.0;
  v.x = l.x - (l.y / slopeX);
  v.z = l.z - (l.y / slopeZ);

  gl_Position = pMatrix * mVMatrix * v;
  frontColor = vec4(0.0, 0.0, 0.0, 1.0);
}

Double Trouble

The technique works, but its major issue is that objects need to be drawn twice. Since I'm using this technique for dense point clouds, it significantly affects performance. The graph below shows the crippling effect of rendering the shadow of a cloud consisting of 1.5 million points: performance is cut in half.

Fortunately, this problem isn't difficult to address. Since fine detail isn't an important property of a shadow, we can simply render the shadow pass at a lower level of detail. I had already written a level-of-detail Python script which evenly distributes a cloud across multiple files. I used this script to produce a sparse cloud of about 10% of the original points.
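In practice this just means two draw calls per frame. A minimal sketch, assuming fullCloud and sparseCloud are the original and the 10% clouds loaded as separate point clouds (how the shadow shader above gets bound is library-specific and omitted here):

// Pass 1: the full-detail cloud, drawn with the normal shader.
ps.render(fullCloud);

// Pass 2: the sparse (~10%) cloud, drawn with the shadow shader above,
// which flattens it onto the ground plane and colours it black.
ps.render(sparseCloud);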

Matrix Trick

It turns out that planar shadows can be alternatively rendered using a simple matrix.

void drawShadow(in vec3 l, in vec4 v){
  // Projected planar shadow matrix (note: GLSL matrix constructors are column-major).
  mat4 sMatrix = mat4 ( l.y,  0.0,  0.0,  0.0, 
                       -l.x,  0.0, -l.z, -1.0,
                        0.0,  0.0,  l.y,  0.0,
                        0.0,  0.0,  0.0,  l.y);

  gl_Position = pMatrix * mVMatrix * sMatrix * v;
  frontColor = vec4(0.0, 0.0, 0.0, 1.0);
}

This method doesn’t offer any performance increase versus vertex projection, but the code is quite terse. More importantly, using a matrix opens up the potential for drawing shadows on arbitrary planes. This is done by modifying all the elements of the above matrix.

Future Work

Sometime in the future I’d like to experiment with implementing shadows for arbitrary planes. After that I can begin investigating other techniques such as shadow mapping and shadow volumes. Exciting! (:

Anaglyph Point Clouds November 18, 2011

Posted by Andor Salga in FSOSS, Open Source, point cloud, webgl, XB PointStream.
4 comments


See me in 3D!

A couple of weeks ago I gave a talk at FSOSS on XB PointStream. For my presentation I wanted to experiment and see what interesting demos I could put together using point clouds. I managed to get a few decent demos complete, but I didn’t have a chance to blog about them at the time. So I’ll be blogging about them piecemeal for the rest of the month.

The first demo I have is an anaglyph rendering. Anaglyphs are one way to give 2D images a depth component. The same object is rendered at two slightly different perspectives using two different colors. Typically red and cyan (blue+green) are used.

The user wears anaglyph glasses, which have a filter for each colour. A common convention is a red filter over the left eye and a cyan filter over the right. These filters ensure each eye sees only one of the two superimposed perspectives. The mind then merges the two images into a single 3D object.

Method

There are many ways to achieve this effect. One method, which involves creating two asymmetric frustums, can be found here. However, you can also approximate the effect by simply rotating or translating the camera between the two passes. It isn't as accurate, but it's very easy to implement:

// ps is the instance of XB PointStream
// ctx is the WebGL context

ps.pushMatrix();
// Yaw camera slightly for a different perspective
cam.yaw(0.005);
// Create a lookAt matrix. Apply it to our model view matrix.
ps.multMatrix(M4x4.makeLookAt(cam.pos, V3.add(cam.pos, cam.dir), cam.up));   
 
// Render the object as cyan by using a colour mask.
ctx.colorMask(0,1,1,1);
ps.render(pointCloud);
ps.popMatrix();
  
// Preserve the colour buffer but clear the depth buffer
// so subsequent points are drawn over the previous points.
ctx.clear(ctx.DEPTH_BUFFER_BIT);

ps.pushMatrix();
// Restore the camera's position for the other perspective.
cam.yaw(-0.005);
ps.multMatrix(M4x4.makeLookAt(cam.pos, V3.add(cam.pos, cam.dir), cam.up));   

// Render the object as red by using a colour mask.
ctx.colorMask(1,0,0,1);
ps.render(pointCloud);
ps.popMatrix();

Future Work

I hacked together the demo just in time for my talk at FSOSS, but I was left wondering how much better the effect would look if I had created two separate frustums instead. For this I would need to expose a frustum() method in the library. I can't see a reason not to add it considering this is a perfect use case, so I filed a ticket!

XB Awesome at FSOSS 2011 October 28, 2011

Posted by Andor Salga in Arius3D, FSOSS, Open Source, point cloud, webgl, XB PointStream.
1 comment so far

Tomorrow I’ll be giving a talk at FSOSS 2011 titled “XB PointStream: Rendering Point Clouds with WebGL”. Okay, the name is a bit dry, but I’ve packed a lot of awesome into this talk. If you liked my recent post on turbulent point clouds, you should definitely come to my talk! (:

I’ll be in room S2169 at 2:00, hope to see you there!

Fixing Ro.me’s Turbulent Point Cloud October 24, 2011

Posted by Andor Salga in Open Source, point cloud, webgl, XB PointStream.
6 comments

Run me
Turbulent Point Cloud

A few days ago I noticed the turbulent point cloud demo for ro.me was no longer working in Firefox. Firefox now complains that the array being declared is too large. If you look at the source, you’ll see the entire point cloud is being stuffed into an array, all 6 megabytes of it. Since it no longer works in Firefox, I thought it would be neat to port the demo to XB PointStream to get it working again.

Stealing Some Data…

I looked at the source code and copied the array declaration into an empty text file.

var array = [1217,-218,40,1218,-218,37,....];

So I had the data, which was great, but I needed it to be in a format XB PointStream could read. I had to format the vertices to look something like this:

1217	-218	40
1218	-218	37
...

Conversions

Using JavaScript to do the conversion made the most sense, but I first had to split up the file containing my array declaration so Firefox could actually load it. After some manual work, I had 6 files, each with its own smaller array literal.

I then wrote a JavaScript script which loaded each array and dumped the formatted text into a web page. I ran my script and copied the output several times until I had the entire file reconstructed as an .ASC file.
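The conversion itself boiled down to something like the sketch below (illustrative only; toASC and the chunk variable names are hypothetical, and each chunk is assumed to hold interleaved x, y, z values as in the array above):

// Sketch of the conversion, assuming each of the 6 chunk files declares
// a global array of interleaved x, y, z values (e.g. var array1 = [...];).
function toASC(arr) {
  var lines = [];
  for (var i = 0; i < arr.length; i += 3) {
    lines.push(arr[i] + "\t" + arr[i + 1] + "\t" + arr[i + 2]);
  }
  return lines.join("\n");
}

// Dump the formatted text into the page so it can be copied out.
document.body.appendChild(document.createTextNode(toASC(array1)));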

Adding Turbulence

Once I had the point cloud loaded in XB PointStream, I needed to add some turbulence. I could have used the shader which the original demo used, but I found a demo by Paul Lewis which I liked a bit better. The demo isn’t quite as nice as the original, but given more time I could incorporate the original shader as well to make it just as good.

XB PointStream 0.8 Released October 23, 2011

Posted by Andor Salga in point cloud, webgl, XB PointStream.
1 comment so far
Free Cam Visible Human Projection

I lost a bit of momentum for this project while fixing some Processing.js tickets, so I’m releasing the tickets I completed for 0.8 now to keep things moving.

Download

You can download the library on this page which contains links to the minified and full versions.

Change Log

Some of the changes include:

  • Added functions to change projection (perspective and orthographic)
  • Created an ‘Export LOD’ script for Python
  • Added visible human demo
  • Created fake parser for testing
  • Added support to delete point clouds
  • Fixed ASC Exporter to work with Blender 2.59
  • And more fixes…

House of Cards WebGL Demo Source September 2, 2011

Posted by Andor Salga in Open Source, point cloud, webgl, XB PointStream.
2 comments

On Wednesday I posted a video on YouTube of Firefox rendering Radiohead’s “House of Cards” point cloud data in WebGL. I’m now releasing the code for anyone to play with RIGHT HERE. If you download it, make sure to read the README file!

I tested the demo on Chromium and found that it didn’t work, so I’ll be debugging that over the weekend. If you find any other issues with the code or instructions or if you make a neat visualization, let me know!

Real-time WebGL Rendering of House of Cards August 31, 2011

Posted by Andor Salga in Open Source, point cloud, webgl, XB PointStream.
3 comments

Watch the Video

I was reading over the WebGL around the net roundup this week when I saw Mikko Haapoja’s rendering of a frame of Radiohead’s House of Cards. I thought this was neat and wondered if I could render the frames in real-time using XB PointStream.

CSV Parser

First I downloaded the House of Cards data and saw it was in CSV format. XB PointStream already has the architecture set up for user-defined parsers, so I was able to write one without changing the library itself.
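The parsing itself boils down to something like this (a sketch with hypothetical names; hooking it into XB PointStream's parser interface is omitted, and each CSV line is assumed to start with the point's x, y and z values):

// Sketch: turn one frame's CSV text into a Float32Array of positions.
// parseCSVPoints is a hypothetical helper, not library API.
function parseCSVPoints(text) {
  var rows = text.split("\n");
  var verts = [];
  for (var i = 0; i < rows.length; i++) {
    var cols = rows[i].split(",");
    if (cols.length < 3) { continue; } // skip blank or malformed lines
    verts.push(parseFloat(cols[0]), parseFloat(cols[1]), parseFloat(cols[2]));
  }
  return new Float32Array(verts);
}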

User-defined Shader

To make things interesting I wrote a simple shader which changes the positions of the points and colors while the video plays. Again, I didn’t need to change the library since user-defined shaders are supported as well.

Performance Issues

When I first began rendering the video, I was using a MacBookPro 3.1 (2Ghz, 2GB RAM, GeForce 8600M GT 128MB), but Firefox began chugging after about 400 frames. Luckily my supervisor (Cathy Leung) saved me by giving me a new MBP 8.2 (2GHz, 8GB RAM, AMD Radeon HD 6490M 256MB). With this new system I was able to render it in real-time without any major issues.

There are 2100 frames of Thom Yorke singing, which total 880MB, so you can't stream it online :c However, I'll place all my work on Github if you'd like to tinker with it. Keep an eye on my blog for when I make it available.

LOD With XB PointStream August 27, 2011

Posted by Andor Salga in Open Source, point cloud, Python, webgl, XB PointStream.
1 comment so far

Run me

A simple way to increase performance when rendering point clouds is to use levels of detail (LOD). If the camera is far from an object, a lower-detail version of that object can be rendered without much loss of visual quality. As the camera moves closer, higher-fidelity versions can be drawn.

A while ago I thought about adding this functionality to XB PointStream and soon realized that the library already supports it. It can load different point clouds into the same canvas, which lets users split a cloud into a series of files and render them conditionally. When I had this idea I was too busy with other work, so I had to put it off.

Yesterday I finally sat down and began working out the details to get a demo up and running. I needed two things. First, I needed to evenly distribute the points in a cloud. All of the clouds in my repository have been scanned linearly or in blocks, which doesn't lend itself well to LOD; each file needs to represent a coarse version of the entire object. Second, I needed to split the cloud into several files.

I decided to start with a simple ASCII point cloud format, ASC. The file is organized something like this:

1.13 6.86 7.81 0 128 255
7.27 9.59 7.29 0 128 255
...
...

Using Some Python

I don’t know Python, but I knew it would be a good choice for this task. My plan was to load the input file into an array, randomly select indices from the array and write them out to the output file.

Soon after I got to work on writing my script, I saw that the random module has a shuffle() function for lists. This saved me quite a bit of work, so I was happy. I then hacked together the rest of the script. If you're a Python developer, let me know if there are ways I can fix up the code.

"""
Andor Salga

This script will take an ASC file, evenly distribute the
points and separate the cloud into a series of files.
"""

import random
import sys

#
# Usage: python lod.py pointCloud.asc 4
if (len(sys.argv) < 4):
  print "Usage: python lod.py pointcloud.asc outFileName [numLevels]\n"
else:
  inFileName = sys.argv[1];
  outBaseFileName = sys.argv[2];

  arr = []
  file = open(inFileName)
  while 1:
    line = file.readline()
    arr.append(line)
    if not line: break 
  file.close()

  random.shuffle(arr);

  # Find out how many points we are going to have per
  # file. Don't worry about rounding issues. We will simply
  # append the remaining points to the last cloud.
  numFiles = int(sys.argv[3])
  pointsPerFile = len(arr)/numFiles

  nextFile = 0
  outFilename = outBaseFileName + "_0.asc"

  FILE = open(outFilename, "w")

  line = 0
  
  for item in arr:
    FILE.write(str(item)[0 : -1] + "\n")
    if(line > 0 and (line % pointsPerFile == 0 and nextFile+1 != numFiles )):
      FILE.close()
      nextFile += 1
      outFilename = outBaseFileName + "_" + str(nextFile) + ".asc"
      FILE = open(outFilename, "w")
    line += 1  
  FILE.close()

I tested this first with a million points and didn't see much of a performance gain. This was a bit disappointing and suggests that there are other bottlenecks in the library. I decided instead to try the largest file I have, the visible human, which is about 3.5 million points. I fed the cloud into my script and split it into 10 files. This time I got a reasonable FPS gain: rendering all the clouds gives me ~20 FPS, and when I zoom out and render only 1 cloud, I get ~60 FPS.
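The conditional rendering itself can be as simple as choosing how many of the chunk files to draw based on the camera's distance from the cloud. A rough sketch with illustrative names (clouds is assumed to hold the chunks produced by the script above, loaded separately; cloudCenter and farThreshold are hypothetical):

// Sketch only; nothing here is XB PointStream API beyond ps.render().
// Each entry of 'clouds' is one shuffled chunk (e.g. vh_0.asc, vh_1.asc, ...),
// so drawing fewer chunks just draws a sparser version of the same object.
var dx = cam.pos[0] - cloudCenter[0];
var dy = cam.pos[1] - cloudCenter[1];
var dz = cam.pos[2] - cloudCenter[2];
var dist = Math.sqrt(dx * dx + dy * dy + dz * dz);

// Far away: one chunk is enough. Close up: draw them all.
var numToDraw = dist > farThreshold ? 1 : clouds.length;
for (var i = 0; i < numToDraw; i++) {
  ps.render(clouds[i]);
}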

In Conclusion

If you’re going to render a large data set with XB PointStream, consider using my script to split it up into many files to increase your script’s performance.

Using WebGL readPixels? Turn on preserveDrawingBuffer August 1, 2011

Posted by Andor Salga in JavaScript, Open Source, point cloud, webgl, XB PointStream.
13 comments

Since I’ve already written a few blogs about WebGL’s readPixels and because developers seem to find my page mostly by this keyword, I decided to help clarify a recent issue I found.

In some of my WebGL scripts I have a feature which allows users to convert 3D images to 2D (see here). The script does this simply by making a call to readPixels.

This used to work until browsers (namely WebKit and Chrome) began implementing the preserveDrawingBuffer option. This option is set when the WebGL context is acquired and, as its name suggests, it controls whether the drawing buffers are preserved between frames.

What this means is that if preserveDrawingBuffer is false (which it is by default), the color and depth buffers are not kept around after each frame is composited. Trying to call readPixels in this state will return an array of zeroed-out data.

If you’re planning on calling readPixels, you’ll need to turn on this option when you get your WebGL context.

var context = canvas.getContext("experimental-webgl", {preserveDrawingBuffer: true});

The WebGL spec states that this may cause a performance hit on some machines so only enable it if you really need to.
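For reference, a minimal read-back looks something like this (a sketch; canvas is assumed to be the WebGL canvas element):

// Acquire the context with the buffer preserved, then read the pixels back.
var context = canvas.getContext("experimental-webgl", {preserveDrawingBuffer: true});

// ... render the scene ...

var pixels = new Uint8Array(canvas.width * canvas.height * 4); // RGBA
context.readPixels(0, 0, canvas.width, canvas.height,
                   context.RGBA, context.UNSIGNED_BYTE, pixels);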

WebGL Visible Human Project July 30, 2011

Posted by Andor Salga in Open Source, point cloud, webgl, XB PointStream.
7 comments

Clicky!

A couple of weeks ago it was my turn to demonstrate to the other researchers at the Seneca CDOT lab what I had been working on. I gave them a quick presentation of XB PointStream, a WebGL library for rendering point clouds in web pages.

After watching my presentation, Mike Hoye expressed interest in extracting data from the visible human project video and feeding it into the library, creating an interactive 3D version of the video. The idea seemed fascinating and I was eager to see the results. It wasn't long before Mike casually mentioned he had finished everything. He showed me his demo and I was extremely impressed.

Slider Slicer

After playing around with the demo for a bit I decided it needed at least one change: a slider to slice the subject just like the video. I added a jQuery UI slider which now allows users to create cross sections of the point cloud. I then made some changes to the camera so it always focuses on the section which has been sliced ‘out’.

Something Completely Different

This demo is interesting because it differs from my others in two ways. First, the point cloud has contents. All my other clouds are actually hollow, while this one has a bunch of ‘meat’ within it.

Second, the file size. Mike mentioned the data set had been scaled down from several gigabytes. This sparked my interest since none of my current point clouds surpass 50MB. If I manage to solve the problem of dynamically loading sections of point cloud files, I could start experimenting with loading the entire 10GB cloud.

Making it Faster

The demo is sluggish right now since it stupidly renders 3.5 million points per frame. However, this can be fixed. Because the user clipping planes work on the Y-axis and because the cloud loads along the Y-axis, it would be possible to do coarse-level culling on sections of the cloud if it were pre-cut along that axis. For example, if I had 5 or so cross-section ‘chunks’ of the cloud and one of the clipping planes passed the bounds of a chunk, that chunk could be culled from rendering entirely. When I have time (ha, ha) I'll get around to doing that.
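A sketch of that coarse culling, with hypothetical names (each chunk is assumed to carry the min/max Y of its points, and clipMinY/clipMaxY are the current clipping-plane positions):

// Sketch only: skip whole chunks whose Y range falls outside the clip range.
for (var i = 0; i < chunks.length; i++) {
  var c = chunks[i];
  // The chunk is entirely clipped away; don't send it to the GPU at all.
  if (c.maxY < clipMinY || c.minY > clipMaxY) {
    continue;
  }
  ps.render(c.cloud);
}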

Special Thanks

A huge thanks to Mike Hoye who on his own time performed some magic and got the data out of the video. This demo would not have been possible without him. I’m looking forward to some higher fidelity point clouds in the future!
