Computer Graphics Raytracing project
Please note - the information on this page is subject to change until the course starts.
General note: In various exercises you will be asked to implement new functionality. In these cases your ray tracer should accept the (syntax of the) example scene files provided. Under no circumstances should your ray tracer be unable to read older scene files (those that do not enable the new functionality), or modify the interpretation of older scene files.
Getting Started: A raytracer framework
In this assignment you will set up the environment for your ray tracer implementation. A note on programming languages: the framework we provide is written in C++; however, you are free to use any other language (but be sure your raytracer works on the lab machines).
Tasks:
- Install the source code of the ray tracer framework. The C++ version includes build files for gcc/MinGW (through a Makefile) and Microsoft Visual Studio 2003 (later versions of VS automatically update the solution; note that if there is a compilation problem after the conversion, you may need to set the Target Platform Version to the most recent version in the project settings):
Compile it and test whether it works. Using the supplied example scene scene01.yaml, the following image should be created:
This is a simple default scene that is not yet completely raytraced. Just for your reference and to give you an idea of where we are heading, this is what the scene composition actually looks like when correctly raytraced and when viewed from the side:
- Look at the source code of the classes and try to understand the program. Of particular importance is the file triple.h, which defines mathematical operators on vectors, points, and colors. The actual raytracing algorithm is implemented in scene.cpp. The YAML-based scene files are parsed in raytracer.cpp. Look at the included README file for a description of the source files.
Compilers
You have access to a variety of compilers.
Visual Studio with automatic raytracing after compilation
- Right-click on the Raytracer project in the Solution Explorer
- Find Configuration Properties > Build Events > Post-Build Event
- add
$(OutDir)$(TargetFileName) scene01.yaml scene01.png
under Command Line (this assumes that the yaml file is located not in the Debug directory but in the main source file directory, and this is also where the resulting image will be saved)
- If you want your generated picture to be shown right after raytracing, add another line that calls a picture viewer with the generated png file (use the macros that VS provides; e.g., "C:\Program Files\IrfanView\i_view64.exe" "$(RemoteDebuggerWorkingDirectory)scene01.png")
- You need to do this for both Release and Debug compilation such that it works if you switch between the two
- Note that, for recent versions of Visual Studio (e.g., 2022), the use and logic of post-build events and of the debugger seem to have changed. If you run the solution, this automatically starts the debugger, which by default does not have access to the command line parameters. So add them by right-clicking on the Raytracer project in the Solution Explorer and then, under Configuration Properties > Debugger > Command Arguments, enter scene01.yaml scene01.png. The run process is then as follows:
- If a complete re-build happens, after the compilation the Post-Build Events are executed (so running the exe and then displaying the result, if you follow the instructions above). Then, the Debugger is called and the exe is run again.
- If no complete re-build happens, then the Debugger is called directly with the command options, but the resulting image is not displayed.
- The latter step happens even if the run is started as "Start without debugging"; even then the debugger settings are used to start the compiled program.
- So maybe it is best not to use the Post-Build Event settings anymore, but instead to set the Command Arguments for the debugger, and then to have an image viewer like IrfanView open that reloads the image as it is updated.
- Alternatively, set the Post-Build Event settings and then use Build Solution to compile, and do Clean Solution if you want to force a rebuild. You can add the respective build icons to the toolbar with a right-click on it (the Clean Solution action is not included automatically; you need to add it to the Build toolbar manually), or you can right-click on the solution in the Solution Explorer, where the rebuild actions are also available.
1. Raycasting with spheres & Phong illumination
In this assignment your program will produce a first image of a 3D scene using a basic ray tracing algorithm. The intersection calculation, together with the normal calculation, lays the groundwork for the illumination.
- For now your raytracer only needs to support spheres. Each sphere is given by its midpoint, its radius, and its surface parameters.
- A point light source is given by its position (x,y,z) and color. In the example scene a single white light source is defined.
- The viewpoint is given by its position (x,y,z). To keep things simple the other view parameters are static: the image plane is at z=0 and the viewing direction is along the negative z-axis (you will improve this later).
- The scene description is read from a file.
Tasks:
- Implement the intersection calculation for the sphere. Extend the function Sphere::intersect() in the file sphere.cpp (a sketch follows after this task list). The resulting image should be similar to the following image:
- Implement the normal calculation for the sphere. To this end, complete the function Sphere::intersect() in the file sphere.cpp. Because the normal is not used yet, the resulting image will not change.
- Implement the diffuse term of Phong's lighting model to obtain simple shading. Modify the function Scene::trace(Ray) in the file scene.cpp. This step requires a working normal calculation (which is implemented in the function Sphere::intersect() in the file sphere.cpp, where the normal is returned as part of a Hit). The resulting image should be similar to the following image:
- Extend the lighting calculations with the ambient and specular parts of the Phong model. This should yield the following result:
- Test your implementation using this scene file. This should yield the following result:
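A minimal sketch of the sphere intersection and the Phong terms is given below. It assumes the framework's Vector/Point/Color types from triple.h, a Ray with origin O and normalized direction D, a Hit carrying the distance t and the surface normal N, and a sphere with center 'position' and radius 'r'; adapt these names (and Hit::NO_HIT(), ray.at(), the material fields ka/kd/ks/n) to the code you actually have.

    // Sphere::intersect() in sphere.cpp -- solve |O + t*D - C|^2 = r^2 for t.
    Hit Sphere::intersect(const Ray &ray)
    {
        Vector OC = ray.O - position;              // from the sphere center to the ray origin
        double b = 2.0 * OC.dot(ray.D);
        double c = OC.dot(OC) - r * r;
        double d = b * b - 4.0 * c;                // discriminant (a == 1 for a unit direction)
        if (d < 0.0) return Hit::NO_HIT();

        double t = (-b - sqrt(d)) / 2.0;           // nearest of the two solutions
        if (t < 0.0) t = (-b + sqrt(d)) / 2.0;     // ray starts inside the sphere
        if (t < 0.0) return Hit::NO_HIT();         // sphere lies behind the ray

        Vector N = (ray.at(t) - position).normalized();  // outward unit normal
        return Hit(t, N);
    }

    // Phong terms in Scene::trace(Ray), after the nearest hit has been found.
    Color color = material->color * material->ka;                 // ambient
    Point hit = ray.at(min_hit.t);
    Vector N = min_hit.N, V = -ray.D;                             // surface normal and view direction
    for (unsigned i = 0; i < lights.size(); ++i) {
        Vector L = (lights[i]->position - hit).normalized();
        color += std::max(0.0, N.dot(L)) * material->kd           // diffuse, clamped at 0
                 * material->color * lights[i]->color;
        Vector R = (2.0 * N.dot(L) * N - L).normalized();         // L mirrored about N
        color += pow(std::max(0.0, R.dot(V)), material->n)        // specular highlight
                 * material->ks * lights[i]->color;
    }
    return color;

For the first subtask only the intersection test matters; the ambient, diffuse and specular terms can then be enabled one by one to reproduce the intermediate images.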
Results (yaml + image) to submit (minimum requirements):
- image of the default scene with diffuse and ambient lighting only,
- image of the default scene with the complete Phong model implemented, and
- image of the second scene file (scene02.yaml) with the complete Phong model.
2. Normal buffer & z-buffer & additional geometry
In raytracing, Hidden Surface Removal (HSR) is essentially built into the technique: you are using (without noticing) a z-buffer-like algorithm in your raytracer. In this assignment you will gain a deeper understanding of this process. In addition you will create a normal buffer.
Tasks:
- Adapt your program such that it can also produce a z-buffer image instead of the usual rendering. This new render mode should be configurable in the scene file: introduce a RenderMode directive for the YAML file, which can be set to zbuffer instead of the default phong (or use numbers to encode the mode). Use gray levels to encode the distances (a sketch follows after this task list). Note: also read the near and far distances that are needed for the z-buffer from your scene file; do NOT compute them based on the scene. An example z-buffer image:
- Adapt your program such that it can also produce a normal buffer image instead of the normal rendering (again, this should be configurable in the scene file; name it normal or use a third number code for it). Map the three components of a normal to the three color channels (be sure to map the possible range of the components (-1..1) to the range of the colors). Two example normal buffer images (of two different eye positions; the second is taken from [1000,200,200] and needs an adjustment in scene.cpp to look in the right direction):
- Implement one more geometry (teams with 3 ppl.: two more) from the following list (submit corresponding YAML and PNG files):
- Quad
- Plane
- Box
- Cylinder
- Cone
- Triangle
- Torus
Only three things need to be added for each new geometry:
- Reading the parameters
- Intersection calculation
- Normal calculation
- (Bonus) Experiment with your own scene descriptions. Even with just spheres you can build some interesting scenes!
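The two buffer modes only change how the color of a pixel is computed. A minimal sketch, assuming near and far come from the scene file and min_hit carries the distance and the unit normal of the nearest hit (adapt the names to your framework):

    // z-buffer mode: map the hit distance from [near, far] to a gray value
    // (here white = near, black = far; the inverse convention is fine too).
    double z = (min_hit.t - near) / (far - near);
    z = std::min(1.0, std::max(0.0, z));            // clamp hits outside the range
    Color gray(1.0 - z, 1.0 - z, 1.0 - z);

    // normal buffer mode: map each component of the unit normal from [-1,1] to [0,1]
    Vector N = min_hit.N;
    Color normalColor(0.5 * (N.x + 1.0), 0.5 * (N.y + 1.0), 0.5 * (N.z + 1.0));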
Use the following additional(!) scene files to test your implementation:
Results (yaml + image) to submit (minimum requirements):
- image of the default scene in z-buffer rendering mode (scene01-zbuffer.yaml),
- image of the default scene in normal buffer rendering mode (scene01-normal.yaml), and
- image(s) showing the additionally implemented geometry (geometries).
3. Optical laws
In this assignment you will implement a global lighting simulation. Using recursive ray tracing the interaction of the lights with the objects is determined. The program should be able to handle multiple colored light sources and shadows.
Tasks:
- Extend the lighting calculation in Scene::trace(Ray) such that it produces shadows. First make it configurable whether shadows should be produced (e.g., Shadows: true). The general approach for producing shadows is to test whether a ray from the light source to the object intersects other objects. Only when this is not the case does the light source contribute to the lighting (a sketch follows after this task list). For the following result a large background sphere is added to the scene (scene01-shadows.yaml).
- Now loop over all light sources (if you didn't do that already) and use their color in the calculation. For the following result two different lights were used (scene01-lights-shadows.yaml).
- Implement reflections by recursively sending new rays in the direction of the reflection vector. Add the contribution of these rays as an additional color contribution, multiplied by the specular component (as it models the amount of direct reflection of an object). You can think of the returned color values as independent light sources for which only the specular reflection is computed (ambient makes no sense at all since we have already accounted for it, and diffuse reflection is better approximated by NOT taking it into account in this coarse approximation). An example result with a maximum of two reflections (scene01-reflect-lights-shadows.yaml):
If your output looks like this, you may want to have another look at your sphere intersections:
- (only teams with 3 ppl.) Implement refraction (see below).
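A minimal sketch of the shadow test and the reflection recursion inside Scene::trace(), under the same hedged Ray/Hit assumptions as before; 'shadows', the recursion depth limit and a small offset against self-intersection ("shadow acne") are the parts that are easy to get wrong:

    // Shadow test for light i: skip its diffuse/specular contribution if an
    // object lies between the hit point and the light.
    Vector L = (lights[i]->position - hit).normalized();
    bool inShadow = false;
    if (shadows) {
        Ray shadowRay(hit + 0.001 * N, L);                 // offset avoids hitting the object itself
        double distToLight = (lights[i]->position - hit).length();
        for (unsigned j = 0; j < objects.size(); ++j) {
            Hit h = objects[j]->intersect(shadowRay);
            // adapt the "did it hit" test to your Hit representation
            if (h.t > 0.0 && h.t < distToLight) { inShadow = true; break; }
        }
    }
    if (!inShadow) { /* add the diffuse and specular terms as before */ }

    // Reflection: trace a ray in the mirror direction and weight the returned
    // color by the specular coefficient; 'depth' limits the recursion.
    if (depth > 0 && material->ks > 0.0) {
        Vector R = (ray.D - 2.0 * ray.D.dot(N) * N).normalized();
        Ray reflected(hit + 0.001 * N, R);
        color += material->ks * trace(reflected, depth - 1);
    }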
Results (yaml + image) to submit (minimum requirements):
[all images with shadows enabled, reflection enabled]
- image of the default scene with shadows for first light source (scene01-shadows.yaml),
- image with shadows enabled for multiple light sources (scene01-lights-shadows.yaml), and
- image with shadows for all light sources and reflection enabled (scene01-reflect-lights-shadows.yaml).
- (only teams with 3 ppl.) image with shadows for all light sources and reflection + refraction enabled (see example below)
(Bonus) Implement refraction, by recursively continuing rays in the direction of the transmission vector, using the parameters refract and eta, where eta is the index of refraction of the material. You can think about the recursion for refraction (and reflection) this way:
The image you should get with reflection should look something like this (courtesy of Brian Martin):
(Bonus) You'll notice that the specular reflections of the scene are not blurred the way the highlights from the light sources are. One way to do something about this is to sample along multiple rays (around the reflection vector) and average the results. Note that for a correct result you should be careful about selecting your vectors and/or the way you average them. Hint: if you take a normal average you should select more rays in those areas where the specular coefficient is high.
(Bonus) Test your implementation using a scene you designed yourself.
4. Anti-aliasing & Extended Camera Model
This assignment starts with anti-aliasing, which will result in better looking images. In addition you will make it easier to move the camera position by implementing an extended camera model.
- Implement super-sampling (anti-aliasing), i.e., casting multiple rays through a pixel and averaging the resulting colors (a code sketch follows after this task list). This should give your images a less jagged appearance. Note that you should position the (destinations of the) initial rays symmetrically about the center of the pixel, as in this figure (for 1x1 and 2x2 super-sampling):
- Again, make sure this is configurable in the scene file. The default should be to have no super sampling (or, equivalently, super sampling with a factor of 1). An example of 4x4 super-sampling (scene01-ss.yaml):
- Implement an extended camera model such that other image resolutions are possible and producing images becomes more flexible. You should keep support for the Eye parameter for backwards compatibility, but allow the specification of a Camera object (instead of the Eye parameter). You should support an eye position, a reference point (center) as in OpenGL, an up vector and a viewSize (the size of a pixel could be determined, for example, by the length of the up vector; alternatively, specify a new multiplication factor that determines the actual resolution of the resulting image). An example of what this should look like (scene01-camera-ss-reflect-lights-shadows.yaml and scene01-zoom-ss-reflect-lights-shadows.yaml):
- For more information on constructing a view, see A Simple Viewing Geometry and chapter 7 of your book. And keep in mind that the length of the up vector determines both the "vertical" and the "horizontal" dimensions of a pixel (you can implement additional functionality to allow for stretched views if you want).
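A minimal sketch combining super-sampling with the extended camera model; the names (eye, center, up, factor, width, height) are illustrative and should be taken from your scene parser, and the image plane is placed through the reference point here (one possible convention):

    // Orthonormal camera basis from eye, center and up (|up| also sets the pixel size).
    Vector G = (center - eye).normalized();          // viewing direction
    Vector right = G.cross(up).normalized();         // "horizontal" image direction
    Vector realUp = right.cross(G).normalized();     // re-orthogonalized up
    double pixelSize = up.length();                  // world-space size of one pixel

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Color col(0.0, 0.0, 0.0);
            for (int sy = 0; sy < factor; ++sy) {
                for (int sx = 0; sx < factor; ++sx) {
                    // sub-pixel positions placed symmetrically around the pixel center
                    double dx = (sx + 0.5) / factor - 0.5;
                    double dy = (sy + 0.5) / factor - 0.5;
                    Point target = center
                        + (x - 0.5 * (width - 1) + dx) * pixelSize * right
                        - (y - 0.5 * (height - 1) + dy) * pixelSize * realUp;
                    Ray ray(eye, (target - eye).normalized());
                    col += trace(ray);
                }
            }
            col /= factor * factor;                  // average of all sub-pixel samples
            image(x, y) = col;                       // store the pixel (adapt to your Image class)
        }
    }

With factor = 1 the sub-pixel offset is exactly the pixel center, so the default behavior is unchanged.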
Results (yaml + image) to submit (minimum requirements):
[all images with shadows enabled, reflection enabled, and (if you implemented it) refraction enabled]
- image of the default scene from the default camera location at 400x400, but with 2x2 super-sampling,
- image of the default scene from the default camera location at 400x400, but with 4x4 super-sampling (scene01-camera-ss-reflect-lights-shadows.yaml),
- image of the default scene from the default camera location at 800x800 but showing the same view/contents, with 2x2 super-sampling,
- image of the default scene from the default camera location at 800x400 (i.e., now a wider view), with 4x4 super-sampling (scene01-zoom-ss-reflect-lights-shadows.yaml), and
- image of the default scene at 800x400, but from a different camera location, with 4x4 super-sampling; for example, you could try to achieve a view like the one shown in the beginning (or some other interesting one):
(Bonus) Implement apertureRadius and apertureSamples parameters for your camera object and use them to simulate depth of field by taking apertureSamples positions (uniformly) within apertureRadius of the eye (note that this disc should be formed using the up and right vectors). This can look like the figure below, using Vogel's model with n in [0,apertureSamples), r=c*sqrt(n), th=n*goldenAngle and c=apertureRadius/(up.length()*sqrt(apertureSamples)) to sample the aperture (scene01-dof-ss-reflect-lights-shadows.yaml):
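A minimal sketch of this aperture sampling (Vogel's model as given above). It reuses the camera basis (right, realUp) and pixelSize from the camera sketch, and assumes a focalPoint, here taken to be the point on the image plane that the pixel's unperturbed primary ray aims at; all names are illustrative:

    const double goldenAngle = M_PI * (3.0 - sqrt(5.0));       // ~2.39996 rad
    double c = apertureRadius / (up.length() * sqrt((double)apertureSamples));

    Color col(0.0, 0.0, 0.0);
    for (int n = 0; n < apertureSamples; ++n) {
        double r  = c * sqrt((double)n);
        double th = n * goldenAngle;
        // offset the eye within a disc spanned by the (pixel-sized) right/up vectors
        Point origin = eye + (r * cos(th)) * pixelSize * right
                           + (r * sin(th)) * pixelSize * realUp;
        Ray ray(origin, (focalPoint - origin).normalized());
        col += trace(ray);
    }
    col /= apertureSamples;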
(Bonus) As you may have noticed, this tends to create visible "rings" instead of blurring the out-of-focus objects. To combat this, you should make sure that in the computation of the angle n is offset by the index of the (sub)pixel in the image (so if the image is 10x10 with no supersampling the last index is 99), and similarly that in the computation of the radius n is offset by fmod(pixel_index*golden_ratio,1.0) (or similar). This works because the golden ratio (and similarly the golden angle) is irrational and very good at producing a sequence of numbers that "looks random" (but is more evenly distributed). The effect can be seen in these images (the left image uses the exact same scene file as before, the right image has been made using apertureSamples=16 instead of 4):
(Bonus) Implement additional supersampling types (see Wikipedia for general information).
(Bonus) Add an optional velocity attribute to objects (default is [0 0 0]) and an exposureTime attribute to Camera (default is 0). Implement motion blur, assuming that the exposure is between -exposureTime/2 and exposureTime/2, and that at time t an object is at position+t*velocity. You should probably also add another attribute exposureSamples to control the number of samples to take, and feel free to vary as much as you like on the theme (you can also let the camera move, for example, or allow for more than just linear motion).
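A minimal sketch of this motion blur, with illustrative names; it assumes trace() can be given a time t and that object positions are offset by t*velocity during intersection:

    Color col(0.0, 0.0, 0.0);
    for (int s = 0; s < exposureSamples; ++s) {
        // sample times spread symmetrically over [-exposureTime/2, +exposureTime/2]
        double t = exposureTime * ((s + 0.5) / exposureSamples - 0.5);
        col += trace(ray, t);       // intersect against position + t * velocity
    }
    col /= exposureSamples;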
5. Texture mapping and alternative illumination models
Tasks:
- Implement texture mapping. With textures it becomes possible to vary the lighting parameters on the surface of objects. For this a mapping from the points of the surface to texture-coordinates is needed. You might want to make a new pure virtual function in Object (and give it a non-trivial implementation in at least Sphere) for computing texture coordinates so that it only has to be done for objects which actually need it. See Links and References for links to example textures to use and section 11.2 (2D texture mapping) of your book for how to compute the texture coordinates. For reading the textures you can use the following line:
Image *texture = new Image("bluegrid.png");
and access the pixel data with texture->colorAt(float x, float y), where x and y are between 0 and 1.
Please note that many of the planetary texture maps are provided as JPEGs, not as PNGs. Renaming *.jpg to *.png does not help here. The image may still show up in your OS (OSes often gracefully ignore such a blunder), but the file is still a JPEG regardless of its extension. To convert it into a PNG you have to use an image processing tool such as GIMP or whatever you prefer.
- Also implement rotation of (at least) spheres (if you haven't already). In connection to the texture coordinates above, think about it this way:
- before we defined the sphere as a point and a radius
- now we need to be able to define texture coordinates, so we need some way to define the latitude and the longitude on the sphere
- so we could add an additional axis via one more (normalized) vector in the yaml file; this allows us to easily define the latitude (i.e., the first part of the texture coordinates) via a dot product of any point on the sphere with the "axis" vector
- but we still do not know where the longitude starts with 0 and ends with 360 degrees
- so we add (again defined in the yaml file) another vector, roughly perpendicular to the first, to point to the start/end point on the equator (in Earth's case that would be the Greenwich meridian)
- from the two defined vectors, the axis vector and the side vector, we can now properly derive a perpendicular local coordinate system by using the cross product to compute two perpendicular vectors in the equator plane, and with these can compute the longitude and thus the second texture coordinate
- with all that in place you now can implement rotations fairly easily by defining an offset to the longitude part of the texture coordinate via an extra angle that you also get from the yaml file
- of course, to make it (mathematically) fully correct using the sketched idea, one would need to use an Earth texture that would look a bit, well, unusual:
- but since the coordinates are not visible in the raytracer, the actual 0/360 degree longitude does not really matter and "normal" texture maps work just fine (for our purposes)
You are NOT required to follow exactly the rotations as sketched above; another option is to implement rotation using quaternions. Some helpful information can be found here:
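A minimal sketch of the axis/side texture-coordinate construction described above, assuming the yaml file supplies an axis vector, a side vector and a rotation angle in degrees (illustrative names), and that p is the intersection point on the sphere:

    Vector n = (p - position).normalized();           // unit vector from the center to p
    Vector w = axis.normalized();                     // "north pole" direction
    Vector u = w.cross(side).cross(w).normalized();   // side projected into the equator plane
    Vector v = w.cross(u);                            // completes the right-handed frame

    double theta = acos(n.dot(w));                    // angle from the pole, in [0, pi]
    double phi   = atan2(n.dot(v), n.dot(u));         // longitude, in (-pi, pi]
    phi += angle * M_PI / 180.0;                      // rotation = offset of the longitude

    double uTex = phi / (2.0 * M_PI);
    uTex -= floor(uTex);                              // wrap into [0, 1)
    double vTex = 1.0 - theta / M_PI;                 // 1 at the "north pole", 0 at the south

    Color texColor = texture->colorAt(uTex, vTex);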
- Alternative illumination models have been developed, in particular for illustration purposes, and they are rather easy to implement in your raytracer. In this assignment you will implement one of these models.
Your task is to implement the illumination model by Gooch et al. (only the illumination part, not the outlines). Be aware of the following:
- The formula for the lighting calculation in the original paper (PDF only accessible from within the university network, if outside use your favorite search engine to get the PDF) is not correct. Use this one: I = kCool *(1 - dot(N,L))/2 + kWarm * (1 + dot(N,L))/2 (note that for this formula it is not necessary that dot(N,L)<0).
- The variable kd in the paper can be set to lights[i]->color*material->color*material->kd.
- Extend the scene description for the new parameters b, y, alpha and beta (reminder: your ray tracer should still accept files that do not set these parameters).
- The Gooch model should not replace the Phong model; instead, which model is used should be configurable in the scene file.
- Gooch should not use ambient lighting, but it can use the same kind of highlights as in Phong shading (the specular component of Phong shading).
- When using Gooch shading you may ignore shadows and/or reflections, but you are not required to (and you might be able to get some interesting effects by not ignoring them).
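A minimal sketch of the Gooch term with the corrected formula above; b, y, alpha and beta come from the scene file, kd is built as described, and kCool/kWarm follow the paper's definitions (kCool = (0,0,b) + alpha*kd, kWarm = (y,y,0) + beta*kd):

    Vector L = (lights[i]->position - hit).normalized();
    Color kd = lights[i]->color * material->color * material->kd;
    Color kCool = Color(0.0, 0.0, b) + alpha * kd;
    Color kWarm = Color(y, y, 0.0) + beta * kd;

    double NdotL = N.dot(L);
    Color col = kCool * (1.0 - NdotL) / 2.0 + kWarm * (1.0 + NdotL) / 2.0;

    // optional Phong-style highlight; no ambient term for Gooch
    Vector R = (2.0 * NdotL * N - L).normalized();
    col += pow(std::max(0.0, R.dot(-ray.D)), material->n) * material->ks * lights[i]->color;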
The resulting image could look like the following (for the first image scene01-gooch.yaml is used):
For testing your texture mapping code you may want to use this YAML scene description (you may need to adjust it to your specific YAML scene specification) and this texture image:
bluegrid.png
blue-earth.yaml
Results (yaml + image) to submit (minimum requirements):
[all images with at least 800x800 resolution, 2x2 super-sampling enabled, shadows enabled, reflection enabled]
- texture coordinate image (red-blue for u-v) in which the object(s) to be texture-mapped show(s) the correct texture coordinate colors,
- image with the same object(s) texture-mapped,
- image with textured object(s) rotated, and
- image with Gooch illumination mode (textures included).
(Bonus) Implement bump-mapping. Some example links to pages with matching pairs of regular texture images and bump maps can be found on the Links and References page. The results could be something similar to the following (texture coordinates, texture mapping, bump mapping, normal buffer):
(Bonus) For the Gooch shading: add a black line to the silhouettes of your objects. This should create a really nice effect (as was seen in the tutorial presentation). For more information see these slides on image-space computation of contours/silhouettes (most examples in the slides were generated with the raytracer).
6. More geometries and 3D mesh files
- Implement an additional geometry type from the above or the following list. Make sure, however, that you have implemented at least a triangle (a sketch follows below).
- Quad
- Planes (determined by a point and a normal)
- Polygon (determined by corner points)
- Cylinder, Cone, parabolic surfaces
- Torus (can have 4 intersection points)
- Blobs
- Free-form surfaces
So, after finishing this task your raytracer should support four different geometries: a sphere, a triangle, and two others.
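For the required triangle, a minimal intersection sketch (Möller-Trumbore), under the same hedged Ray/Hit conventions as before; v0, v1 and v2 are the corner points read from the yaml file:

    Hit Triangle::intersect(const Ray &ray)
    {
        Vector e1 = v1 - v0, e2 = v2 - v0;
        Vector p = ray.D.cross(e2);
        double det = e1.dot(p);
        if (fabs(det) < 1e-9) return Hit::NO_HIT();     // ray (nearly) parallel to the triangle

        double inv = 1.0 / det;
        Vector s = ray.O - v0;
        double u = s.dot(p) * inv;
        if (u < 0.0 || u > 1.0) return Hit::NO_HIT();

        Vector q = s.cross(e1);
        double v = ray.D.dot(q) * inv;
        if (v < 0.0 || u + v > 1.0) return Hit::NO_HIT();

        double t = e2.dot(q) * inv;
        if (t <= 0.0) return Hit::NO_HIT();             // intersection behind the ray origin

        Vector N = e1.cross(e2).normalized();
        if (N.dot(ray.D) > 0.0) N = -N;                 // let the normal face the incoming ray
        return Hit(t, N);
    }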
- Implement 3D mesh objects (read from a file). Use the code (glm.c and glm.h) from an OpenGL project (just remove the drawing code; this way you do not have to link to OpenGL). You can use the same models as for OpenGL-based rendering (obj.zip or OBJ files from INRIA's 3D Meshes Research Database; use MeshLab to view and convert models you find there), but be aware that producing a raytraced image of a model with many triangles can take a long time. For example, the following image of an evil golden rubber duck (with 3712 triangles) took almost nine hours to generate on a reasonably fast machine (with 3x3 super-sampling, relatively unoptimized code).
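A minimal sketch of turning a loaded OBJ model into triangle objects, assuming Nate Robins' glm.c/glm.h as mentioned above; check the glm field names against the header you actually use, and note that addObject and the Triangle constructor are placeholders for whatever your scene and triangle classes provide:

    GLMmodel *model = glmReadOBJ("dolphin.obj");         // path to the .obj file (illustrative)
    glmUnitize(model);                                   // optional: center and scale the model
    for (unsigned int i = 0; i < model->numtriangles; ++i) {
        GLMtriangle &tri = model->triangles[i];
        Point v[3];
        for (int k = 0; k < 3; ++k) {
            // glm stores vertices in a flat, 1-based float array
            GLfloat *vp = &model->vertices[3 * tri.vindices[k]];
            v[k] = Point(vp[0], vp[1], vp[2]);
        }
        scene->addObject(new Triangle(v[0], v[1], v[2], material));
    }
    glmDelete(model);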
- Include your coolest result(s) in the archive that you hand in. These results will be used for the gallery page of this year. You can use images (renders & screenshots), but videos are allowed too :) Please don't hide these files too deep in your archive, so that I can easily spot them.
Results (yaml + image) to submit (minimum requirements):
[all images with at least 800x800 resolution, 2x2 super-sampling enabled, shadows enabled, reflection enabled, potentially texture maps enabled]
- image that shows the additional geometry implemented,
- image that shows the dolphin 3D model (from the obj.zip archive and potentially other shapes),
- coolest image you produced, and
- images that show potentially implemented bonus tasks (include a description).
(Bonus) Implement constructive solid geometry (CSG). Here is an example of a rendering with cylinders and CSG objects:
Or a more complex CSG shape:
(Bonus) Raytracer extensions
The possibilities for extending your raytracer are endless. For inspiration, take a look at the following list (for more information check your Computer Graphics book or the internet):
- Exposure time (motion blur)
- Soft shadows
- Depth of focus
- Lens flare
- Optimizations:
- Reduction of the number of rays:
- Adaptive super-sampling and sub-sampling.
- Insignificance test: when the weight of a ray becomes smaller than a certain value the contribution of the ray is negligibly small and the recursion can be stopped.
- Reducing the number of objects to do intersection tests on.
- Faster rendering for primary rays: which object can be seen in which pixel can be determined by a conventional renderer (z-buffer, scanline).
- Bounding volumes
- Space-Subdivision methods
- Distributed ray tracing
- Parallelization. Raytracing is inherently parallel and fairly easy to parallelize, in particular on today's multi-core PCs (e.g., by distributing image rows or tiles over threads).
- Non-Photorealistic Rendering (NPR):
- Obtain inspiration for your own ideas here