Monday, May 18, 2009

Let's get progressive

One of the features that good old BMRT (R.I.P.) used to have was an interactive progressive-refinement display by default. I always thought this was a very useful display method (especially for lighting), as it allows the user to judge the final image while it is still refining. No need to render the whole image when you know the modification you just made to the lights is way off!

I decided to try to replicate such an interactive display. It proved to be a neat little exercise. I'm sure there are ways to improve this algorithm, but for now I must move on to implementing the camera class and, hopefully soon enough, the shading portion.
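
For anyone curious, the core idea is simple. Here is a rough sketch of how I think about it (made-up names, not the actual RenderRan code): render the image in big blocks, then keep halving the block size, skipping the pixels that earlier passes already traced.

    // Progressive refinement sketch -- not the actual RenderRan code.
    #include <cstdint>

    const int WIDTH = 640, HEIGHT = 480, START_BLOCK = 32;
    uint32_t framebuffer[WIDTH * HEIGHT];

    // Stand-in for the real per-ray shading: just a gradient.
    uint32_t trace(int x, int y)
    {
        return ((uint32_t)(x * 255 / WIDTH) << 16) | ((uint32_t)(y * 255 / HEIGHT) << 8);
    }

    void refreshDisplay() { /* repaint the display window here */ }

    void renderProgressive()
    {
        // Each pass halves the block size: 32x32, 16x16, ... down to 1x1.
        for (int block = START_BLOCK; block >= 1; block /= 2) {
            for (int y = 0; y < HEIGHT; y += block) {
                for (int x = 0; x < WIDTH; x += block) {
                    // Skip pixels a coarser pass already traced.
                    if (block < START_BLOCK && x % (block * 2) == 0 && y % (block * 2) == 0)
                        continue;
                    uint32_t c = trace(x, y);
                    // Flood the whole block with this one sample so the
                    // full frame is covered from the very first pass.
                    for (int by = y; by < y + block && by < HEIGHT; ++by)
                        for (int bx = x; bx < x + block && bx < WIDTH; ++bx)
                            framebuffer[by * WIDTH + bx] = c;
                }
            }
            refreshDisplay();   // show each pass before refining further
        }
    }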

In the meantime I leave you all with this little video. Cheers!




Progressive Interactive Display
(the video looks like crap, I'll try to improve the quality later)

Tuesday, May 5, 2009

Go RenderRan! Go!

When I first started this renderer I was writing the images directly to disk. Opening the image with a viewer program every time I rendered something got old (and boring) really quickly. Since I wanted to simplify things, I used the Qt toolkit to create a simple window that would display the image that had been saved to disk once the render was done. This made things a lot better, but the big problem was that I could not see the image being generated; I had to wait until the image was done to view it. So if the render was completely wrong, I didn't know it until the whole image was finished. This was a little frustrating, so I decided to make the raytracer work with buckets like most "real" renderers do. Figuring out the bucketing algorithm was easy, but getting the Qt window to update dynamically as the buckets were being rendered took a bit of digging around. Eventually I got it to work, so, here you have it... buckets!!
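
For the curious, the pattern boils down to something like the sketch below. This is not the actual RenderRan code (the names are made up, and a real build needs moc because of the Q_OBJECT macro), but it shows the trick: render one bucket per pass through the Qt event loop and call update() after each one so the window repaints as the buckets come in.

    // Bucketed display sketch, Qt 4 era style.
    #include <QApplication>
    #include <QWidget>
    #include <QPainter>
    #include <QImage>
    #include <QTimer>

    const int W = 320, H = 240, BUCKET = 32;

    class RenderWindow : public QWidget {
        Q_OBJECT
    public:
        RenderWindow() : image(W, H, QImage::Format_RGB32), next(0) {
            image.fill(0);
            setFixedSize(W, H);
            QTimer *timer = new QTimer(this);
            connect(timer, SIGNAL(timeout()), this, SLOT(renderNextBucket()));
            timer->start(0);   // one bucket per trip through the event loop
        }
    protected:
        void paintEvent(QPaintEvent *) {
            QPainter p(this);
            p.drawImage(0, 0, image);
        }
    private slots:
        void renderNextBucket() {
            int cols = (W + BUCKET - 1) / BUCKET;
            int rows = (H + BUCKET - 1) / BUCKET;
            if (next >= cols * rows) {
                static_cast<QTimer *>(sender())->stop();   // image done
                return;
            }
            int bx = (next % cols) * BUCKET;
            int by = (next / cols) * BUCKET;
            for (int y = by; y < by + BUCKET && y < H; ++y)
                for (int x = bx; x < bx + BUCKET && x < W; ++x)
                    image.setPixel(x, y, qRgb(x % 256, y % 256, 128)); // stand-in shading
            ++next;
            update();   // repaint after every bucket so progress shows
        }
    private:
        QImage image;
        int next;
    };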




The renderer runs quite a bit faster than it appears in this video, but the recording software seems to slow it down.

Saturday, May 2, 2009

Putting It All Into Perspective

I finally arrived at the part of the book that deals with perspective projections. At this point the perspective is achieved in a hard-coded way and there is no real interface to control the field of view. The reason for this is that I have not yet created the camera classes to handle the transformation of the view or the field of view of the projection.


First perspective projection rendered by RenderRan

Same as previous image but with a tighter FOV
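
For reference, this is roughly what the hard-coded stage looks like (a sketch with my own names, not the actual RenderRan code). The view-plane distance is the knob a proper camera class would later derive from the field of view: d = (width / 2) / tan(fov / 2), so a larger d gives a tighter FOV, as in the second image.

    // Hard-coded perspective sketch: eye at the origin, looking down -z.
    #include <cmath>

    struct Vec3 { double x, y, z; };

    Vec3 normalize(const Vec3 &v)
    {
        double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return Vec3{v.x / len, v.y / len, v.z / len};
    }

    Vec3 primaryRayDir(int px, int py, int width, int height)
    {
        const double d = 500.0;               // hard-coded view-plane distance
        double x = px - 0.5 * width + 0.5;    // pixel center in view-plane coords
        double y = 0.5 * height - py - 0.5;   // flip y so up is +y
        return normalize(Vec3{x, y, -d});     // ray direction through the pixel
    }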

The More The Merrier

After playing around with a single sphere, I moved on to the next section in the book, which discusses how to render more than one object in the scene. After a few code modifications I was able to get this image, which contains two spheres and one plane. The plane is inclined and it's "cutting" through the spheres.

Multiple objects in the scene
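
The code change boils down to a nearest-hit loop: instead of testing a single sphere, the tracer tests every object and keeps the closest intersection in front of the ray origin. A rough sketch (hypothetical names, not the book's or RenderRan's actual classes):

    // Nearest-hit search over a list of objects.
    #include <vector>
    #include <limits>
    #include <cstddef>

    struct Ray { /* origin and direction would live here */ };

    struct Object {
        virtual ~Object() {}
        // Returns true on a hit and writes the ray parameter to t.
        virtual bool hit(const Ray &ray, double &t) const = 0;
    };

    const Object *closestHit(const std::vector<Object *> &scene,
                             const Ray &ray, double &tNearest)
    {
        const Object *nearest = 0;
        tNearest = std::numeric_limits<double>::max();
        for (std::size_t i = 0; i < scene.size(); ++i) {
            double t;
            // The small epsilon avoids hits at (or behind) the origin.
            if (scene[i]->hit(ray, t) && t > 1e-6 && t < tNearest) {
                tNearest = t;
                nearest = scene[i];
            }
        }
        return nearest;   // null means the ray only hit the background
    }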

Playing with My Balls

Here are a couple of other rendered images using the same ball from the previous renders. Here I started playing with the code to see what kind of output I could get.

In this image I used the dot product of the viewing vector "I" and the normal "N" at the current shading point. This allows the renderer to give the appearance of lighting (NdotI is an essential part of lighting calculations).
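
In case it helps, here is the idea in a few lines (a sketch with my own names; I'm treating "I" as the unit vector from the shading point back toward the eye):

    // NdotI "headlight" shading: scale the surface color by how much
    // the surface faces the viewer.
    #include <algorithm>

    struct Vec3 { double x, y, z; };

    double dot(const Vec3 &a, const Vec3 &b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // N and I are assumed normalized; clamping to zero keeps surfaces
    // that face away from going negative.
    Vec3 shadeNdotI(const Vec3 &N, const Vec3 &I, const Vec3 &baseColor)
    {
        double k = std::max(0.0, dot(N, I));
        return Vec3{baseColor.x * k, baseColor.y * k, baseColor.z * k};
    }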

Since it took me a bit of work to get the NdotI image to work, I decided to inspect my normals, and this is the image I got. I think there might be something wrong, as the normals flip before they reach the edges.
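
The usual trick for inspecting normals is to remap each component of the unit normal from [-1, 1] into a color channel, roughly like this (my names, not the actual code). A flipped normal shows up as a sudden color change, which is exactly what the edges of that image look like.

    // Normal visualization: map N in [-1, 1]^3 to RGB in [0, 255]^3.
    struct Vec3 { double x, y, z; };

    void normalToRGB(const Vec3 &N, unsigned char rgb[3])
    {
        rgb[0] = static_cast<unsigned char>((N.x * 0.5 + 0.5) * 255.0);
        rgb[1] = static_cast<unsigned char>((N.y * 0.5 + 0.5) * 255.0);
        rgb[2] = static_cast<unsigned char>((N.z * 0.5 + 0.5) * 255.0);
    }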

First Steps

After dealing with a bunch of build issues I was eventually able to render my first set of images. These are very simple and might not be impressive at all, but they are the raytracing equivalent of a "hello world" program.

This is the first image that RenderRan ever generated: a single, constant-color, orthographically projected sphere.
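
For reference, the "hello world" boils down to orthographic rays tested against a single sphere: every primary ray starts at its pixel position on the view plane and points straight down -z, and the sphere test is a quadratic in the ray parameter. A rough sketch with my own names:

    // Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
    #include <cmath>

    struct Vec3 { double x, y, z; };

    Vec3 sub(const Vec3 &a, const Vec3 &b) { return Vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }
    double dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    bool hitSphere(const Vec3 &o, const Vec3 &d,
                   const Vec3 &center, double radius, double &t)
    {
        Vec3 oc = sub(o, center);
        double a = dot(d, d);
        double b = 2.0 * dot(oc, d);
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return false;                 // the ray misses
        t = (-b - std::sqrt(disc)) / (2.0 * a);       // nearer root first
        if (t <= 0.0) t = (-b + std::sqrt(disc)) / (2.0 * a);
        return t > 0.0;
    }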

Same sphere but with jittered sampling (1 sample per pixel).

Same sphere but with 64 jittered samples per pixel.
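
The jittered sampling splits each pixel into an n x n grid and shoots one ray through a random point inside each cell, then averages the results (n = 8 gives the 64 samples above). A sketch, assuming a trace() function that does the per-ray work:

    // Jittered (stratified) supersampling of one pixel.
    #include <cstdlib>

    struct Color { double r, g, b; };

    Color trace(double x, double y);   // one primary ray (assumed to exist)

    Color samplePixel(int px, int py, int n)
    {
        Color sum = {0.0, 0.0, 0.0};
        for (int j = 0; j < n; ++j) {
            for (int i = 0; i < n; ++i) {
                // A random offset inside the (i, j) sub-cell keeps the
                // samples stratified while breaking up aliasing patterns.
                double sx = px + (i + std::rand() / (double)RAND_MAX) / n;
                double sy = py + (j + std::rand() / (double)RAND_MAX) / n;
                Color c = trace(sx, sy);
                sum.r += c.r; sum.g += c.g; sum.b += c.b;
            }
        }
        double inv = 1.0 / (n * n);
        sum.r *= inv; sum.g *= inv; sum.b *= inv;
        return sum;
    }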

RenderRan - Who, Why and How

Who?

My name is Rudy and I'm a professional Technical Director working in the film industry. Over the last five years I have been lucky to work at several awesome places and in different fields such as lighting, particle effects, simulations, pipeline, and shader development. Of all these areas of work, shading has always been my favorite. There is something innately beautiful about the process of taking a bunch of lines of code and having the renderer return something that looks like glass, skin, marble, fire, etc.

Why?

After many years of working in the industry and always being fascinated by the rendering process of computer generated imagery (CGI), I finally decided to take on a project that had been lingering in the back of my mind: writing a rendering engine.

Why is it called "RenderRan"? Well, every year (almost every year, to be more precise) I attend the RenderMan User Group meeting during SIGGRAPH. At these meetings the folks from the Pixar RenderMan team always hand out limited edition walking teapots. Eventually I had about 4 or 5 of them, and my daughter decided that these collector items were awesome toys for a 3-year-old. Since I knew my teapots were facing extermination, I figured I should at least use the opportunity to teach my daughter a little bit about what her geek father does for a living. So I showed her the teapot and asked her, "You know what this is? It's a RenderMan Walking Teapot." She stared at me for about 5 seconds and said "RenderRAN?". So, all the teapots, as well as every hat or t-shirt with the RenderMan icon, became "RenderRans".

How?

I'm writing RenderRan in C++, using the open source Qt libraries to handle framebuffer display and simple image handling.
I'm using the book Ray Tracing from the Ground Up as a reference.
The source code of the renderer can be found at http://code.google.com/p/rcraytracer