Green Screen September 30, 2009

Posted by Andor Salga in Open Source, Processing.js, webgl.
5 comments

I might as well start with this video. Don’t bother reading the rest of this blog, just watch the video. Dave, thanks for sharing it. Actually, if he hadn’t sent me the link, this blog would have been much shorter.

I’ll be discussing my current progress with the Processing.js project. My goals for the project include completing some of the outstanding tasks which involve 3D rendering. For my 0.1 release, I’ll be updating C3DL to use the WebGL graphics context instead of Canvas3D. This might seem unrelated, so I’ll briefly explain.

C3DL was designed to provide web developers with a simple interface for placing 3D content on the web. Developers would not be required to understand OpenGL, just our library. However, C3DL currently uses the Canvas3D plug-in, so if users want to see demos run in the browser, they need to install the plug-in. WebGL was recently added to the Mozilla trunk and will be included in a future release. Therefore, we need to update the library so demos will run without prompting the user to install the plug-in. So one reason for doing this is to keep C3DL up to date.

Yesterday I wrote a blog post about working with large code bases. I mentioned that one important task is making sure the project builds before changing anything. Well, what I’m doing here is a bit like that. I’m making sure:

  • I understand any differences between Canvas3D and WebGL
  • I can be sure everything in WebGL is stable
  • I have a good starting point
  • I have an environment to experiment in

Instead of starting from scratch and adding 3D support to Processing.js, I’ll first get a good handle on WebGL. Once C3DL is in working order, I’ll start porting the code to Processing.js.

I have Firefox 3.0.13, 3.5.3 and 3.7a1pre. I’ll be using the two latest versions to get this going. I use the Canvas3D plug-in with 3.5. Firefox 3.7 is endowed with the awesomeness of WebGL, so it should be enough to run some of my C3DL demos. Therefore the only line which needs changing is the line which acquires the rendering context. (Yeah, right)

Before I started, I took another look at Vlad’s spore creature viewer. I noticed he was using VBOs. This is interesting because when I tried using them while working on C3DL, I had no performance gain and I should have had quite a bit. Maybe there was an issue and it was resolved? A speed improvement would be great. That way, the world can enjoy playing NES games online.

I opened up the C3DL rendering .js file and updated how the library acquires a rendering context.
try
{
  // Does the user have the canvas3D plugin?
  glCanvas3D = cvs.getContext('moz-glweb20');
}
catch (err)
{
  glCanvas3D = null;
}
if(!glCanvas3D)
{
  try
  {
    // Does the user have a browser that supports WebGL?
    // If so, use that instead.
    glCanvas3D = cvs.getContext('moz-webgl');
  }
  catch (err)
  {
    glCanvas3D = null;
  }
}

I started Minefield (3.7) and tried to open a demo. I immediately received a C3DL error. This is a bit odd, because the following check returns true on 3.5 and false on 3.7:
if(effectTemplate instanceof c3dl.EffectTemplate){...}
I decided to make the conditional pass regardless, since I knew for a fact effectTemplate WAS an instance of c3dl.EffectTemplate. At the same time I thought it would be nice to have a JavaScript debugger in case I needed it. Unfortunately, I didn’t find a compatible version of Firebug for 3.7, and I didn’t know how to use Venkman after I had installed it. I was a bit impatient, so I just used printf equivalents.
c3dl.debug.logInfo("I'm running");

After forcing the conditional to pass, I tried my code again and got the following error in the Firefox console:
Error: uncaught exception: [Exception... "Not enough arguments [nsICanvasRenderingContextWebGL.uniformMatrix4fv]" nsresult: "0x80570001 (NS_ERROR_XPC_NOT_ENOUGH_ARGS)" location: "JS frame :: file:///Users/andor/Documents/Canvas3D/canvas3dapi/renderer/rendereropengles20.js :: anonymous :: line 985" data: no]
It looked like it was blowing up on this line:
glCanvas3D.uniformMatrix4fv(varLocation, matrix);
I checked out the WebGL IDL file and it had the function defined as such:
void uniformMatrix4fv (in GLint location, in GLboolean transpose, in nsICanvasArray value);
It seemed some things had changed since Canvas3D. I only needed to add another parameter, but I decided to comment out the line and see what other functions had changed.
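For reference, the fix will presumably just be passing a value for the new transpose flag. An untested sketch of what I expect the call to become (I’m assuming false is the right value, as it is in OpenGL ES):
// Untested: add the transpose argument from the IDL above.
// The matrix itself may also need wrapping in whatever array type
// nsICanvasArray expects.
glCanvas3D.uniformMatrix4fv(varLocation, false, matrix);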

I got this exception next:
Error: uncaught exception: [Exception... "Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsICanvasRenderingContextWebGL.vertexAttribPointer]" nsresult: "0x80004005 (NS_ERROR_FAILURE)" location: "JS frame :: file:///Users/andor/Documents/Canvas3D/canvas3dapi/shaders/model/standard/std_callback.js :: anonymous :: line 66" data: no]

My code:
glCanvas3D.vertexAttribPointer(normalAttribLoc, 3, glCanvas3D.FLOAT, false, 0, currColl.getNormals());

Was there no data in currColl.getNormals()? I know it still works on 3.5. Commented.

Next I got this one:
Error: uncaught exception: [Exception... "Could not convert JavaScript argument arg 1 [nsICanvasRenderingContextWebGL.bindTexture]" nsresult: "0x80570009 (NS_ERROR_XPC_BAD_CONVERT_JS)" location: "JS frame :: file:///Users/andor/Documents/Canvas3D/canvas3dapi/shaders/model/standard/std_callback.js :: anonymous :: line 110" data: no]
It was happening on this line:
glCanvas3D.bindTexture(glCanvas3D.TEXTURE_2D,-1);

Here I bind to an invalid texture object in case an object isn’t textured. This prevents the last active texture from being used by this object. Commented.
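My guess is that WebGL wants null rather than -1 to unbind a texture, so the eventual fix will probably look something like this (an untested assumption on my part):
// Untested: unbind the current texture by binding null instead of -1.
glCanvas3D.bindTexture(glCanvas3D.TEXTURE_2D, null);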

Next one.
Error: uncaught exception: [Exception... "Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsICanvasRenderingContextWebGL.vertexAttribPointer]" nsresult: "0x80004005 (NS_ERROR_FAILURE)" location: "JS frame :: file:///Users/andor/Documents/Canvas3D/canvas3dapi/renderer/rendereropengles20.js :: anonymous :: line 950" data: no]

The code:
this.setVertexAttribArray = function(shader, varName, size, array)
{
    let attribLoc = glCanvas3D.getAttribLocation(shader, varName);
    if(attribLoc != c3dl.const.SHADER_VAR_NOT_FOUND)
    {
        // This line is blowing up.
        // glCanvas3D.vertexAttribPointer(attribLoc, size, glCanvas3D.FLOAT, false, 0, array);
        glCanvas3D.enableVertexAttribArray(attribLoc);
    }
    else
    {
        c3dl.debug.logError('Attribute variable "' + varName + '" not found in shader with ID = ' + shader);
    }
}

This is a wrapper I wrote to make the C3DL code a bit cleaner, since the third, fourth and fifth parameters always had to be the same. Commented.
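When I come back to un-comment all of these, my guess is that WebGL wants vertex data in a buffer object bound to ARRAY_BUFFER, with the last argument of vertexAttribPointer becoming a byte offset rather than the array itself. Here is a rough, untested sketch of what the wrapper might turn into (the createBuffer/bindBuffer/bufferData calls and the typed-array wrapping are assumptions on my part, not code that runs today):
this.setVertexAttribArray = function(shader, varName, size, array)
{
    let attribLoc = glCanvas3D.getAttribLocation(shader, varName);
    if(attribLoc != c3dl.const.SHADER_VAR_NOT_FOUND)
    {
        // Assumption: the data now has to live in a vertex buffer object.
        let vbo = glCanvas3D.createBuffer();
        glCanvas3D.bindBuffer(glCanvas3D.ARRAY_BUFFER, vbo);

        // 'array' will likely need to be wrapped in whatever typed array
        // WebGL ends up expecting before being handed to bufferData().
        glCanvas3D.bufferData(glCanvas3D.ARRAY_BUFFER, array, glCanvas3D.STATIC_DRAW);

        // The final argument is now an offset into the bound buffer, not the array.
        glCanvas3D.vertexAttribPointer(attribLoc, size, glCanvas3D.FLOAT, false, 0, 0);
        glCanvas3D.enableVertexAttribArray(attribLoc);
    }
    else
    {
        c3dl.debug.logError('Attribute variable "' + varName + '" not found in shader with ID = ' + shader);
    }
}
Creating a new buffer on every call is obviously wasteful; the buffers would eventually be created once and reused, but that’s a problem for a later release.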

The next error wasn’t really a surprise. There are enough changes to the library now that this was expected.
No VBO bound to index 0 (or it's been deleted)!
Error: uncaught exception: [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsICanvasRenderingContextWebGL.drawArrays]" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: file:///Users/andor/Documents/Canvas3D/canvas3dapi/shaders/model/standard/std_callback.js :: anonymous :: line 120" data: no]

My line:
glCanvas3D.drawArrays(renderer.getFillMode(), 0, currColl.getVertices().length/3);

Commented, next.
No VBO bound to index 0 (or it's been deleted)!
Error: uncaught exception: [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsICanvasRenderingContextWebGL.drawArrays]" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: file:///Users/andor/Documents/Canvas3D/canvas3dapi/renderer/rendereropengles20.js :: anonymous :: line 928" data: no]

This is because I’m running a mocap demo and I’m rendering points.
glCanvas3D.drawArrays(c3dl.const.FILL,0, (c3dl.const.POINT_VERTICES.length)/3);
Again, I commented the line and finally got an interesting one:
Error: glCanvas3D.swapBuffers is not a function
Source File: file:///Users/andor/Documents/Canvas3D/canvas3dapi/renderer/rendereropengles20.js
Line: 222

I first thought the name might have changed to SwapBuffers (uppercase ‘S’). I looked up the API and saw it was no longer present. This had to be a mistake; I need to tell OpenGL when the backbuffer should be swapped with the front buffer. I commented it out, and that’s when the errors ceased. I saw the FPS changing on my page, but the context was white. It should have been blue.

[Screenshot: Snapshot 2009-09-29 23-09-49]

I was pretty excited, since I was sure I now had to be getting a graphics context, but the context should have been colored blue. Was it really working? Something was wrong. I sat thinking about it and figured no rendering was happening because I had commented out the swapBuffers() call. It was probably drawing to the backbuffer and never swapping to the front. I went back to Vlad’s spore creature viewer demo and poked around, wondering how he was doing it. I saw there wasn’t any call to swapBuffers. The only lines that gave a hint were:
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLES, 0, numVertexPoints);

He’s clearing the color and depth buffers in one call and drawing what he needs. I couldn’t argue with the code since it was working and mine wasn’t. I was a bit stuck. I turned to my IRC client and pinged joe, mark and dave. Bas and mstange also got involved, trying to help out too. Thanks guys! I was told that the swapping is probably happening automatically. I decided this is something I can accept, since Firefox is now handling the rendering: it uses double buffering to display content and probably does the same with the canvas. Still, I couldn’t get the canvas to change color.

I felt like giving up. I tried to think it through, I tried looking up things on MXR, I tried asking developers on IRC. Then I thought: what if I set the clear color and do a clear right after I get the graphics context? No other C3DL code would execute in between. I thought it might be worth a try…
glCanvas3D = cvs.getContext('moz-webgl');
glCanvas3D.clearColor(0,1,0,1);
glCanvas3D.clear(glCanvas3D.COLOR_BUFFER_BIT);

It works!

[Screenshot: Snapshot 2009-09-29 22-49-07]

I admit, my demo is not quite as impressive as Vlad’s demo, but hey, I have something!

Next, I’ll have to address all those functions which now have different signatures. Maybe then I can get a vertex to render.

1,000,000 SLOC September 29, 2009

Posted by Andor Salga in Open Source.
4 comments

In our last lecture in the DPS909 class, Dave taught us how to be lost productively. The lecture focused on reading and modifying large code bases. We were then instructed to read a chapter of Code Reading: The Open Source Perspective which related to the same topic. The reading wasn’t too long, but it had some great tips. I’ve amalgamated some of the main points I’ve learned.

The Life of a Developer

So, you’re given the task of writing a patch for an application whose makefiles have more lines than you’ve ever written in your life. The application has limited or no documentation. How do you approach the situation?

Documentation

Some documentation will exist if you’re lucky. Make sure you gather any resources you can get your hands on.

Be prepared to ask questions in IRC. Of course asking questions must be done properly. Before asking a question…

  • Do at least some research beforehand
  • Provide some background on your particular situation
  • Don’t hammer the other users on the channel, they may be busy
  • Don’t ask four-word questions like “How does X work?”
  • Don’t ask “Can I ask a question?” The answer will be “yes”.
  • Don’t get sore if someone gives you a one-word answer.
  • Stay on the channel once your question has been answered. You might get a better answer.
  • Use proper grammar (to the best of your ability)
  • Use proper spelling (if in doubt, use a spellchecker)
  • Use English, not l33t speak (it can be very irritating)
  • Be prepared to search new terms introduced in an answer
  • Always thank anyone who was willing to help
  • Pass on the knowledge; don’t say “Oh, just ask Dave or Chris”

A Working Build

The next one may seem a bit obvious, but make sure it isn’t overlooked: make sure the project builds properly before you change anything. It’s extremely important you remember not to change a thing in the code before you know it builds. Once you know everything is okay, you can start writing your patch, and you’ll know any build errors are the result of your own code. I think this mostly becomes an issue when a small change is made to the system and a dash of overconfidence is present. It’s difficult to come up with an example, but I know it has happened to me before. It’s obvious, but it can still be overlooked.

Modeling

Even though you think someone’s code is garbage because it’s undocumented and full of kludges, don’t knock it right away. When you’re required to write a patch, try to model your code on the code already present. Here’s an example: you open some file and notice everyone seems to be using PRInt32. You think an int is good enough since everything builds correctly. You finish your patch and try to submit it, but your patch won’t be approved, because you haven’t considered that the developers working on the project may know something you don’t.

Learn GREP

Mark Fernandes is a huge GREP fan, and for good reason. I’ve had a number of classes where he demonstrated its power. Just look at the man page! GREP is a wonderful tool which can help you find what you’re looking for fast, if you know how to use it.

Use printf

Just because printf was one of the first functions you learned doesn’t mean it’s useless. printf is simple and easy to use, and it can be great when doing a trace or checking whether a piece of code is being called.

Less Is More

This one initially seemed counter-productive to me, but it makes perfect sense. Don’t try to understand all 10,000 lines of an application before trying to make a change. Firstly, you’ll never finish, and secondly, you’ll become overwhelmed within the first half hour. Instead, try to understand the absolute minimum required to make the change. Obviously don’t go out of your way to ignore other things, but don’t look into the networking code if you’re making changes to the JPEG routines.

Not As I Do

Once you’ve made your change, don’t screw over the next developer by leaving your code undocumented as well. Even though you struggled and suffered because someone didn’t document their code, don’t put that same burden on a future developer. Document your code, add it to a readme, write a blog post, write a wiki page or update the issue on Bugzilla. Others will be thankful you did.

I’m sure I left out other great tips; feel free to comment and add your own!

DPS909 – week 4 – Initial Project Plan September 27, 2009

Posted by Andor Salga in Open Source, Processing.js.
4 comments

This blog post marks the official start of my work on the Processing.js project.

What Is Processing.js?

Before getting into the details of Processing.js, Processing itself must first be discussed. Processing is a Java-like language developed by Ben Fry and Casey Reas to allow designers and animators to quickly and easily learn scripting and render animations. The project also encompasses an environment, specifically the PDE, the Processing Development Environment. The PDE is simple and very easy to use. It offers only the necessary features and makes development relatively painless. The combination of the simple language and environment creates a perfect gateway into programming. Some high schools and universities are actually using the language to teach introductory programming. Processing has been used in commercial applications and is currently supported by volunteers.

As the Web matured, it became evident to John Resig that JavaScript, not Java, was the language of the Web. Java will always remain a plug-in, but JavaScript will be part of the browser. Seeing the potential of Processing, Resig decided to bring that functionality to the Web without the need for a plug-in such as Java. Indeed, Processing.js is targeted at users who don’t want their animations to depend on plug-ins such as Java, Flash and what’s that other one? Silver-something? Silverware?

With all the elements in place (HTML5, the canvas tag, and JavaScript), Resig decided to port the Processing language to where he felt it belongs. The language itself was not changed, but a new JavaScript library was developed to parse and render Processing code.

One of the most powerful aspects of Processing.js is that it runs on all leading browsers, including Firefox, Opera, Safari and even IE (with the help of ExplorerCanvas). That’s quite significant, seeing how a sketch can be written quickly and rendered on a friend’s system without concern about browser compatibility or absent plug-ins. That is really cool.
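If I remember Resig’s original release correctly, using the library is as simple as handing a canvas element and a string of Processing code to a global Processing() function. A rough sketch of the idea (the exact entry point is my assumption and may have changed since then):
// Assumption: Processing.js exposes a global Processing(canvas, code) function
// that parses a string of Processing code and renders it to a canvas element.
var canvas = document.getElementById("sketch");   // an existing canvas on the page
var code = "void setup() { size(200, 200); } " +
           "void draw() { background(0); ellipse(mouseX, mouseY, 20, 20); }";
Processing(canvas, code);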

Where We Come In

As it stands, much of the 2D functionality Processing offers has been emulated in Processing.js. However, there are still some empty functions which need implementation, as well as missing 3D functionality which needs to be completed. This is where we come in. We have been given the opportunity to work with the library to add/fix/modify existing Processing.js code.

I decided to join this project because I have a strong interest in rendering and graphics. I have some experience with rendering on the Web from working on C3DL and hope some of what I learned can be integrated into this project. Conversely, I hope what I learn from working on Processing.js can later be incorporated back into C3DL.

Since this project has eight students, staying connected, collaborating and keeping everything organized is that much more important. We will stay connected via IRC on the #seneca and #processing.js channels. We will collaborate and stay organized using our blogs as well as the Processing.js project page on Zenit and the project task list on MDC. If face-to-face meetings are required, we will book the CDOT T1042 meeting room.

How Will I Slay This Dragon?

With duct tape. I’ll try to put the scant good programming practices aside and try the duct tape approach. Get it done. Get it out the door. Blog about the success.

Release 0.1

Dave suggested we each pick two tasks which require implementation and complete them for 0.1. He suggested one task should be simple, just to get some practice and get our feet wet. I scanned the last few tasks and saw mag() among some math functions. I looked up the Processing reference pages and saw it’s a magnitude function which returns the length of a 2D or 3D vector. I figured I could handle that. Next I chose the ambientLight() function since that’s quite simple too. I soon realized it would be difficult to test that function since there is no means to even draw a 3D object! After talking to Dave, he told me I would have to add a couple of categories and complete those first. So, for 0.1, I have signed up to first port the existing C3DL code to use WebGL instead of the Canvas3D plug-in. Since it’s been a while since I have worked on the setup and initialization part of C3DL, doing this will refresh my memory. It will also give me a chance to jot down the steps required to get a 3D context ready for rendering.
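Just to show how small that first task really is, here is roughly what I expect mag() to boil down to (a sketch based on the reference page, not the code I’ll actually submit):
// mag() returns the length (magnitude) of a 2D or 3D vector.
// The reference lists mag(a, b) and mag(a, b, c), so the third
// component defaults to zero when it isn't supplied.
function mag(a, b, c)
{
  if (c === undefined) { c = 0; }
  return Math.sqrt(a * a + b * b + c * c);
}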

Release 0.2 and Onward

I know my plans will drastically change, but here are my goals for the next few releases:

For 0.2, I plan to get some 3D objects rendered. Rendering a cube is more complex than rendering a point, but it should be an easy enough task that success is likely. If successful, I will take on other important tasks such as getting the matrix stack working. I will accomplish this by making use of the C3DL code, referring to OpenGL books, bugging people on IRC, blogging about problems I encounter and reading the variety of blog posts online. Using the duct tape approach, I will know I’m successful when what I wrote is mostly working without crashing. Then I can post a blog entry about it and move on.
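For the matrix stack part, the bookkeeping itself is simple; the work is wiring it into the renderer. Here is the kind of pushMatrix()/popMatrix() skeleton I have in mind (names and details are my own guesses, not actual Processing.js code):
// A model-view matrix stack: pushMatrix() saves a copy of the current
// matrix, popMatrix() restores the most recently saved one.
var matrixStack = [];
var currentMatrix = [1, 0, 0, 0,
                     0, 1, 0, 0,
                     0, 0, 1, 0,
                     0, 0, 0, 1];  // 4x4 identity, stored as a flat array

function pushMatrix()
{
  matrixStack.push(currentMatrix.slice(0));  // store a copy, not a reference
}

function popMatrix()
{
  if (matrixStack.length > 0)
  {
    currentMatrix = matrixStack.pop();
  }
}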

0.3 onward might include implementing lights. There are three standard light types in the realm of 3D graphics: directional lights, point lights and spotlights. Each of these has components including diffuse, ambient and specular colors. Implementing this is almost a whole other project; I know this because it involved so much work when I implemented it in C3DL. Then again, I may be able to extract large portions of the code from C3DL. If I manage to reduce coding time because of this, I’ll adjust and try to take on more tasks.
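To give an idea of the data involved, each light boils down to a handful of vectors that eventually get uploaded to the shader as uniforms. A rough sketch of the structure (my own guess, not C3DL or Processing.js code):
// A directional light: a direction plus the three standard color components.
var directionalLight = {
  direction: [0, -1, 0],        // which way the light points
  ambient:   [0.2, 0.2, 0.2],   // color added to every surface it reaches
  diffuse:   [0.8, 0.8, 0.8],   // color scaled by the angle between surface and light
  specular:  [1.0, 1.0, 1.0]    // color of the shiny highlight
};
// A point light would replace direction with a position, and a spotlight
// adds a cut-off angle and a direction on top of that.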

There is something very exciting about working with technology which has zero published books. On one hand it’s frustrating, because you must rely on blog posts and IRC, which aren’t as reliable. On the other hand, it’s very exciting! I am very thankful I am able to work on this project, as it truly is innovative and cutting-edge technology. I love Seneca.

There is a whole lot of learning required of me to accomplish my goals, most of which will likely involve reading books, man pages, blogs and wikis, and talking to IRC folk. I figure I will be spending most of my time on a few key reference pages.

Collaboration and Contribution

Because there are eight students writing blogs about Processing.js, the project may catch the interest of someone who has some spare time and might be interested in helping. I think as long as we give credit to anyone helping us and

One requirement in this course is that we must collaborate with other students and help them with some task. Finding another student who needs help shouldn’t be a problem; I think we’ll all run into problems soon enough. In our last group meeting, we decided five of the eight of us will be working on 2D functions and the rest will work on 3D functions. So helping another student with 2D content will be a nice break from 3D when I get stuck on a bug.

Possible Resistance

My main concern going into this project is how I will manage to get everything completed on time. I will have to stay organized and diligent as I work on it. I do have an affinity for the project, so just knowing I can work on it will be enough incentive to sit down and code. Of course, with new technology there are always bugs. I already got an e-mail a few days ago about problems with the canvas tag behaving incorrectly. I know I will run into bugs, and I will do my best not to work in isolation. If I discover a problem or get lost on some task, I will write a blog post about it, and hopefully my peers will be able to help me.

My largest fear is taking on this project and not being able to complete my allotted work, or being late with my releases. The ORI/CDOT can be a quiet facility, especially at 3AM, so making use of the research area overnight will likely have to happen sometime during the semester. That’s a last resort. Hopefully careful time management and planning will prevent such a need.

I already began working on 0.1 and should have something ready when the time comes!

DPS909 – week 4 – Building Firefox September 27, 2009

Posted by Andor Salga in Open Source.
2 comments


Readings

Paul Reed’s lecture was a bit hard to follow. He used a lot of jargon with which I’m only vaguely familiar. I did learn a few things, however. I was surprised he answered one question I had for the past few days. I was wondering what some of the differences were between Mozilla’s build and my build. Among other things, self-builds are not official builds and some modules aren’t compiled when a self-build is done. It was also a surprise to see a retro Andrew. Hehehe.

I read the Introduction to Mercurial and found that, although the philosophy is a bit different, the commands are fairly similar to TortoiseSVN’s. Looking at the examples, I began remembering some things Dave was talking about a few lectures ago regarding changeset identifiers.

Prepping for the Build

To make the build go as smoothly as possible, I decided to build on OS X. I’m sure it would be a huge pain to try on Windows, and I’m probably too unfamiliar with Linux. So, OS X it is. I began by browsing some of the links on Zenit and decided to grab Hg first. I went to the Mac OS X section and followed the link to the download page. I got the 1.3.1 version of Mercurial for OS X 10.5 and installed it. I typed hg in the terminal and saw that it was working.
c-leungs-macbook-pro:~ andor$ hg
Mercurial Distributed SCM

basic commands:

 add        add the specified files on the next commit
 annotate   show changeset information by line for each file
 clone      make a copy of an existing repository
 commit     commit the specified files or all outstanding changes
 diff       diff repository (or selected files)
 export     dump the header and diffs for one or more changesets
 forget     forget the specified files on the next commit
 init       create a new repository in the given directory
 log        show revision history of entire repository or files
 merge      merge working directory with another revision
 parents    show the parents of the working directory or revision
 pull       pull changes from the specified source
 push       push changes to the specified destination
 remove     remove the specified files on the next commit
 serve      export the repository via HTTP
 status     show changed files in the working directory
 update     update working directory

use "hg help" for the full list of commands or "hg -v" for details
I’m familiar with the basic use of svn on Windows, OS X and Linux, but I’ve never used Hg, so I figured it should be interesting to use.

I then went to the build instructions page, which led me to the build prerequisites for OS X. The page stated I needed the Xcode Tools, but I remembered I had already installed those a while ago when I was working on C3DL. The instructions then told me to install MacPorts.

So, I went to the MacPorts site and then to the install page. It was at this point I forgot what version of OS X I was running. Was it Snow Leopard, Leopard or Tiger? I clicked the little apple icon in the menu bar and clicked “About This Mac”. I have version 10.5, which means Leopard. The MacPorts site also said I needed X11. I was pretty sure I had X11, since it ran every time I started GIMP.

At this point I had two choices, build MacPorts from source (yeah right) or download the .dmg. There’s no way I want to introduce more complexity and possible failures to this process, so I went with the .dmg file. It seemed as if the installation froze for a while, but it eventually finished.

I went back to the OS X build prerequisites, which said that after installing MacPorts I should run the following command to make sure MacPorts is up to date.
$ sudo port selfupdate
I first refreshed my memory of what exactly sudo does. I then tried running the command:
c-leungs-macbook-pro:~ andor$ sudo port selfupdate
WARNING: Improper use of the sudo command could lead to data loss
or the deletion of important system files. Please double-check your
typing when using sudo. Type "man sudo" for more information.
To proceed, enter your password, or type Ctrl-C to abort.
Password:
sudo: port: command not found

So I guess this warning happens the first time a user tries to run sudo. After I entered my password, I figured I must have made a mistake because I got an error message. A quick copy and paste into a search engine led me to a solution:
export PATH=$PATH:/opt/local/bin
export MANPATH=$MANPATH:/opt/local/share/man
export INFOPATH=$INFOPATH:/opt/local/share/info

These lines are meant to be placed in .bash_profile, but I was feeling lazy, so I just typed them out in the terminal. Then I tried the command again.
$ sudo port selfupdate
$ sudo port sync

It worked! I then used MacPorts to install the necessary packages for building Firefox:
$ sudo port install mercurial libidl autoconf213
The last command probably ran for over an hour. I went to the kitchen to get some grapes, then started working on other things. While the packages were downloading and installing, I saw this message scroll up the terminal:
To fully complete your installation and make python 2.6 the default, please run
$ sudo port install python_select
$ sudo python_select python26

These lines make sure Python 2.6 is used as the default. Maybe using other versions causes errors? Once the MacPorts installs finished, I ran the python commands and then checked the Hg version.
c-leungs-macbook-pro:~ andor$ hg version
Mercurial Distributed SCM (version 1.3.1)
Copyright (C) 2005-2009 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Finally I could clone the repository.

Cloning the Repository

I needed to make a .mozconfig file in my home directory. Just to be on the safe side….
c-leungs-macbook-pro:~ andor$ cd ~
c-leungs-macbook-pro:~ andor$

Okay, good. I was worried there was another ‘andor’ folder. Using the GUI, I tried to make the .mozconfig file, but OS X complained.
You cannot use a name that begins with a dot ".", because these names are reserved for the system. Please choose another name.
Back to the terminal.
$ touch .mozconfig
Problem solved. Kind of. I went back to the GUI, but the file was hidden (obviously). I wasn’t in the mood to start searching for the checkbox to tick which displays hidden files, so…
Back to the terminal.
$ vi .mozconfig
I love vi. I copied and pasted the following into the .mozconfig file, which will create a debug build of Firefox.
. $topsrcdir/browser/config/mozconfig
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-ff-dbg
mk_add_options MOZ_MAKE_FLAGS="-s -j4"
ac_add_options --enable-debug
ac_add_options --disable-optimize
ac_add_options --with-macos-sdk=/Developer/SDKs/MacOSX10.5.sdk

Time to get the source! But before I went and did that, I checked to make sure nothing was burning. It looked okay, so I ran this command (at around 5:50PM).
$ hg clone http://hg.mozilla.org/mozilla-central/
40 minutes later, it started updating the working directory and finished soon after.

Time To Build

I started building at 6:44PM.
$ make -f client.mk build
Make generated a whole lot of configuration lines and warnings and eventually finished 26 minutes later.

Run Firefox!

I ran the application from the terminal.
$ open obj-ff-dbg/dist/MinefieldDebug.app
The profile manager opened and, remembering what Paul Reed said about potential problems with building from trunk, I created a new profile and called it MFD (Minefield Debug).
[Screenshot: Snapshot 2009-09-26 19-21-02]

It works! Actually, I’m not that surprised. This is why I decided to go with OS X. I’m much less experienced with it, but I knew I wouldn’t run into many problems compared to trying to build on Windows. Next week, I’ll try to find some time to build on Fedora.

IRC Discussions

While I was cloning, I asked a few questions in #seneca and had some discussions with jboston, ted and jhford. Here is one of the conversations, which may be useful to other students. I asked what the difference was between a nightly and trunk. Note: my (unnecessary) comments are omitted.
[20:48] jhford nightly happens once a day
[20:49] jhford trunk is per checkin
[20:49] jhford a nightly build is attempted, but only successful ones are uploaded
[20:49] jhford builds != changeset
[20:50] jhford any time a developer pushes a changeset to mozilla-central, we do a build
[20:50] jhford if that build is a success we upload it's artifacts and run tests on it
[20:51] jhford regardless of the outcome of the build, the changeset is kept indefinitely in Mercurial
[20:51] jhford an artifact would be an installer or tarball or collection of object files
[20:51] jhford whatever was built basically

Time For Fun!

I had a browser with WebGL and couldn’t wait to try it out. I went to the about:config page and did a search for WebGL. I toggled it to true and went to find a demo. The first one I found was Vlad’s spore creature viewer demo.

[Screenshot: Vlad’s spore viewer]
SWEET! I then tried my luck with C3DL. I wrestled with it for a while, but couldn’t get anything to render, yet….

DPS909 – week 4 – Processing.js September 26, 2009

Posted by Andor Salga in Open Source, Processing.js.
5 comments

[Image: cube]

Clicky me!

This is just so Dave and Chris don’t think I’ve been idling. Here is a very much stolen piece of Processing.js code based on F1LTER’s disco demo. His is much cooler, but I was just playing around and experimenting.

DPS909 – week 3 – Google alerts September 20, 2009

Posted by Andor Salga in Random.
add a comment

New technology scares me. I sat looking at Google Alerts and poked it a few times, not sure what to make of it. I figured it would bomb my inbox and I’d never be able to recover. I have about 5KB of inbox space, because that’s how much Seneca loves me. That’s okay, I love Seneca regardless.

So, I overcame my fear and decided to experiment with this strange new technology. I decided to get updated once a day on… something. What search term will generate something, but still has an insignificant presence on the interweb? Ah yes, me.

So, which category do I fit into?
    * monitoring a developing news story
    * keeping current on a competitor or industry
    * getting the latest on a celebrity or event
    * keeping tabs on your favorite sports teams

Hmm, I like to think of myself as a celebrity, let’s go with that. Why in the world did you just read this? I’m just practicing my blog-fu skills.

I’ll see how Google alerts treats me this week and then try some real searches on “processing.js”? “canvas tag”? “silverlight”?

DPS909 – week 3 – Processing September 20, 2009

Posted by Andor Salga in Open Source, Processing.
3 comments

Processing

Here is my applet which was my first stab at Processing. To run the applet you’ll have to agree to give complete and unrestricted access to your system. I decided to give Processing a shot since Dave suggested I check out the original “Processing” before moving on to Processing.js. I think Java is great for some things, but I have to admit, Processing.js is much more sexy. The Processing.js scripts are quick and seem very much part of the browser, whereas the Processing applets spawn annoying prompts and feel a bit bulky. But still, both are pretty cool.

Getting a decent sketch running didn’t take too long at all. The system is similar to a state machine, so I felt somewhat comfortable in the environment. I have some Java programming experience, but it felt like an altogether different language. I also love the minimalistic PDE which makes up for some of Java’s loading time.

I went through the first learning section, then dove right into the API, experimenting with different commands and methods. I usually like low-level programming, but in this case it was nice not to deal with graphics contexts and get results quickly.

I did encounter some issues, such as not being able to render points. I cheated and just drew the stars using dots.

DPS909 Week 2 – 3 September 17, 2009

Posted by Andor Salga in Open Source.
add a comment

This week Dave Humphrey and Chris Tyler blew our minds with this video. That probably got every student’s geek juices flowing and I bet they were all thinking ‘Why didn’t I think of that?’.

The next piece of media I watched was a presentation by Mike Beltzner at Seneca. Mike is employed by Mozilla and works on user experience. He talked about the Mozilla community and the many ways to get involved. One thing I found neat was when he said the project can only be nudged and there was “no one voice” to command its direction. It reminded me of how the project is a living organism, changing and evolving from ideas and input by thousands of developers.

With regard to the Ars Technica column, I’m just glad Seneca has courses like DPS909 and DPS911, which, among other things, merge the worlds of school and work.

Lastly, I watched a presentation given by Dave at Stanford University. You can watch it here. I highly recommend it to anyone interested in getting involved in open source. I found it encouraging and thought how right Dave was when he said you just have to get started.

I remember working on the C3DL project (seriously, have you checked this out yet?). I had to get the library to render on Linux. This involved finding the graphics drivers, installing them, creating profiles in Firefox, creating all sorts of links and funky hacking. My Linux skills were pretty much limited to what I had learned in UNX122, back when that was still a course. But I had some help on IRC to get things running. I did fail to get all the drivers installed and Firefox configured the first few times, but I learned a lot, was helped along the way, and eventually got the thing to work. As I go further into this course, I suppose I should remind myself that I just need to jump right in and ask questions. I’m going to quote Dave, because I find his statement so true:

“You have to resist the temptation to think you know what you’re doing before you start”.

Dave reiterates every class that we aren’t ready to tackle the projects which will be assigned to us. I get excited imagining how much can be learned when facing such an enormous challenge. Dave’s quote reminds me of a quote from The Cathedral and the Bazaar:

“You often don’t really understand the problem until after the first time you implement a solution.”

I think a lot of students can relate, I know I can. Not understanding how to implement a solution isn’t the sign of an immature developer. I think it’s simply the course of learning something novel and challenging.

Next week Dave and Chris will tease us a bit more before we get started on our projects. I guess I might as well start my research on the weekend because I’m getting restless.

DPS909 Week 2 – 2 September 17, 2009

Posted by Andor Salga in Open Source.
add a comment

Just as I arrived home on Tuesday, I checked my inbox and saw a Mozilla conference call was about to start. I’m required to listen in on a conference call as an exercise for my DPS909 course, so I joined the meeting. I had called in to other conference calls hosted by Mozilla in the past, but those had fewer than a dozen developers. This meeting had roughly 40. That’s a lot of geeks! Well, I suspect many were students from the OSD or DPS courses who also got Dave’s email, so we’re just geeks in training.

The meeting was very interesting. I jotted down some notes and spied on the concurrent IRC chat. The experts discussed a range of pertinent issues, one of which was this bug (493601). From what I understood, a developer traced a call which led to a crash and found the error to be resident in a DLL.

I noticed that (for the most part) the current speaker would introduce themselves before speaking. This made sense, since we’re dealing with quite a large group. I did, however, recognize some voices, including Vlad and Joe, whom I had met before.

Some other topics briefly mentioned included shader validation and asynchronous image decoding (so images load in another tab even while it is inactive). There was also some talk about WebGL and improving array performance with regard to Canvas3D rendering. I’m quite interested in that topic, since it is a big issue in C3DL.

I look forward to my next conference meeting!

DPS909 Week2 – 1 September 17, 2009

Posted by Andor Salga in Open Source.
add a comment

This week in my DPS909 course, we were instructed to get involved in the Mozilla community. This involved a whole slew of activities.

First, I created MDC and wikimo accounts. My MDC page is rather boring right now, as it just lists this blog and my email address.

We were told to sign up for at least one mailing list. I gave the list a read and decided to sign up for the dev-tech-svg and dev-tech-gfx mailing lists since I’m interested in graphics.

I’ll save the other stuff for a later blog…

Our last lesson was quite enlightening, as I found out I’m a web noob. To correct this, I have made plans to investigate some tech I heard about long ago but was too busy/lazy to look into at the time. These include Google Alerts, Digg, Slashdot and Reddit.

I’ve been contacting my Seneca alumni friends and encouraging them to attend FSOSS. I think I’ll wear last year’s t-shirt because I think that’s what cool people do.

I decided to add a comment on Mickael’s blog. I know writing a separate blog and linking to the blog you’re responding to is sometimes a better idea, but I wanted to see what the difference was.
