WebGL Visible Human Project
July 30, 2011. Posted by Andor Salga in Open Source, point cloud, webgl, XB PointStream.
A couple of weeks ago it was my turn to demonstrate to the other researchers at the Seneca CDOT lab what I had been working on. I gave them a quick presentation of XB PointStream, a WebGL library for rendering point clouds in web pages.
After watching my presentation, Mike Hoye expressed interest in extracting data from the Visible Human Project video and feeding it into the library, creating an interactive 3D version of the video. The idea seemed fascinating and I was eager to see the results. It wasn’t long before Mike casually mentioned he had finished everything. He showed me his demo and I was extremely impressed.
After playing around with the demo for a bit I decided it needed at least one change: a slider to slice the subject just like in the video. I added a jQuery UI slider which now allows users to create cross sections of the point cloud. I then made some changes to the camera so it always focuses on the section that has been sliced ‘out’.
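The wiring for that is simple enough to sketch. The slider's value is mapped to a Y coordinate inside the cloud's bounding box, and that cutoff drives the clipping plane. Note this is a sketch under assumptions: `setYClip`, `pointStream`, and `cloud` are hypothetical names standing in for whatever the real XB PointStream hooks are, not its actual API.

```javascript
// Map a jQuery UI slider value in [0, 100] to a Y coordinate
// within the cloud's bounding box [minY, maxY].
function sliderToCutoff(value, minY, maxY) {
  return minY + (value / 100) * (maxY - minY);
}

/*
// Hypothetical wiring -- the real XB PointStream call may differ:
$("#slice-slider").slider({
  min: 0,
  max: 100,
  value: 100,
  slide: function (event, ui) {
    var cutY = sliderToCutoff(ui.value, cloud.minY, cloud.maxY);
    pointStream.setYClip(cutY); // clip everything above cutY
  }
});
*/
```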
Something Completely Different
This demo is interesting because it’s different in two ways. First, the point cloud is solid. All my other clouds are actually hollow shells, while this one has a bunch of ‘meat’ inside it.
Second, the file size. Mike mentioned the data set had been scaled down from several gigabytes, which sparked my interest since none of my current point clouds exceeds 50 MB. If I can solve the problem of dynamically loading sections of point cloud files, I could start experimenting with loading the entire 10 GB cloud.
Making it Faster
The demo is sluggish right now since it stupidly renders all 3.5 million points every frame. However, this can be fixed. Because the user clipping planes work on the Y-axis and because the cloud loads along the Y-axis, it would be possible to do coarse-level culling on sections of the cloud if it were pre-cut along this axis. For example, if I had five or so cross-section ‘chunks’ of the cloud and one of the clipping planes passed the bounds of a chunk, that chunk could be culled from rendering entirely. When I have time (ha, ha) I’ll get around to doing that.
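The culling test itself is just an interval-overlap check. Here is a minimal sketch of the idea, assuming each pre-cut chunk carries its own Y bounds; the chunk objects and function name are illustrative, not part of XB PointStream:

```javascript
// Coarse-level culling sketch: keep only the chunks whose Y interval
// overlaps the visible clip range. Chunks entirely above or below the
// clipping planes are skipped before any per-point work happens.
function chunksToRender(chunks, clipMinY, clipMaxY) {
  return chunks.filter(function (chunk) {
    return chunk.maxY >= clipMinY && chunk.minY <= clipMaxY;
  });
}

// Example: five chunks stacked along Y, clipping everything above y = 0.5.
var chunks = [
  { id: 0, minY: 0.0, maxY: 0.2 },
  { id: 1, minY: 0.2, maxY: 0.4 },
  { id: 2, minY: 0.4, maxY: 0.6 },
  { id: 3, minY: 0.6, maxY: 0.8 },
  { id: 4, minY: 0.8, maxY: 1.0 }
];
var visible = chunksToRender(chunks, 0.0, 0.5);
// Chunks 3 and 4 are culled entirely; only chunks 0-2 get drawn.
```

With only five or so chunks the overlap test costs nothing per frame, while the worst-case savings are a large fraction of the 3.5 million points.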
A huge thanks to Mike Hoye who on his own time performed some magic and got the data out of the video. This demo would not have been possible without him. I’m looking forward to some higher fidelity point clouds in the future!