Projective Rendering
- For each triangle of the object/mesh
- Project its vertices onto the screen
- For each pixel in the triangle on the screen
- Compute the colour
This loop is highly parallel and maps well onto SIMD hardware (GPUs are fast at this!):
- Vertex shader: run for every vertex to transform it to normalized screen space (see the sketch after this list)
- Fragment shader: run for every pixel to compute the pixel colour
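A minimal sketch of what the vertex-shader step computes, assuming an OpenGL-style perspective matrix; the function names and the matrix convention are illustrative, not a particular API:

```python
import numpy as np

def perspective_matrix(fov_y, aspect, near, far):
    """OpenGL-style perspective projection matrix (assumed convention)."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def project_vertex(v, mvp):
    """What a vertex shader does: transform to clip space, then perspective-divide."""
    clip = mvp @ np.append(v, 1.0)   # homogeneous clip-space position
    return clip[:3] / clip[3]        # normalized device coordinates in [-1, 1]

mvp = perspective_matrix(np.deg2rad(60), 16 / 9, 0.1, 100.0)
print(project_vertex(np.array([0.5, 0.5, -2.0]), mvp))
```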
Visibility Methods
How do we avoid rendering things that don’t contribute to the final image?
- View volume culling, vertex level: cull iff all vertices are outside with respect to a single view-volume plane
- View volume culling, object level: cull iff the bounding sphere lies entirely outside some view-volume plane, i.e. the center's distance beyond that plane is greater than the sphere's radius (see the sketch after this list)
- View volume clipping: add/modify vertices so that they are all within bounds
- Backface culling: we never see the back side of a closed opaque object, so cull a polygon if the eye is below (behind) its plane
- Occlusion culling, pixel level: use a z-buffer to track the depth at every pixel; only render a fragment if it is closer (lower z) than what is currently in the buffer
- Occlusion culling, object level: build a bounding box and do a “virtual render” of the box; if no pixels pass the z-buffer test, cull the whole object
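A sketch of the object-level view-volume test above, assuming each plane is stored as a unit normal pointing into the view volume plus an offset; the representation is an assumption, not a fixed convention:

```python
import numpy as np

def cull_bounding_sphere(center, radius, planes):
    """Object-level view-volume culling. Assumed convention: each plane is
    (n, d) with unit normal n pointing into the view volume, so a point p
    is inside that plane's half-space when dot(n, p) + d >= 0."""
    for n, d in planes:
        # Signed distance of the sphere center to this plane.
        if np.dot(n, center) + d < -radius:
            return True   # sphere lies entirely outside this one plane: cull
    return False          # conservatively keep (might still miss the volume)

# Plane x >= 0 (normal +x, offset 0): a sphere at x = -5 with radius 1 is culled.
planes = [(np.array([1.0, 0.0, 0.0]), 0.0)]
print(cull_bounding_sphere(np.array([-5.0, 0.0, 0.0]), 1.0, planes))  # True
```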
Raytracing
Classical
- For each pixel in the image
- Generate a single ray from eye to the pixel
- P := intersection of the ray with the closest triangle
- colour_local := shadow colour if the light is not visible from P, Phong(N, L, rayDir) if it is visible
- colour_reflect := raytrace(R) if the surface is reflective
- colour_transmit := raytrace(T) if the surface is refractive
- colour := k_local * colour_local + k_reflect * colour_reflect + k_transmit * colour_transmit
We stop ray tracing if
- The ray hits a diffuse object
- The ray exits the scene
- Some maximum recursion depth is exceeded
- The contribution to the final pixel colour would be too small
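A runnable sketch of the whole recursion with all four stopping conditions; it uses spheres instead of triangles, folds Phong down to a Lambertian diffuse term, and omits the shadow ray for brevity, so treat it as an illustration of the control flow rather than a full implementation:

```python
import numpy as np

MAX_DEPTH = 4          # stop on recursion depth
MIN_WEIGHT = 0.01      # stop when the contribution to the pixel is too small
BACKGROUND = np.zeros(3)

def hit_sphere(o, v, center, radius):
    """Smallest positive t along o + t*v hitting the sphere, or None."""
    oc = o - center
    a = np.dot(v, v)
    b = 2.0 * np.dot(v, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    for t in ((-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)):
        if t > 1e-4:
            return t
    return None

def raytrace(o, v, spheres, light, depth=1, weight=1.0):
    # Stop: max recursion depth, or contribution too small to matter.
    if depth > MAX_DEPTH or weight < MIN_WEIGHT:
        return BACKGROUND
    # Find the closest intersection; each sphere is (center, radius, diffuse, k_reflect).
    best = None
    for center, radius, diffuse, k_reflect in spheres:
        t = hit_sphere(o, v, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, diffuse, k_reflect)
    if best is None:
        return BACKGROUND          # stop: ray exits the scene
    t, center, diffuse, k_reflect = best
    p = o + t * v
    n = (p - center) / radius      # unit surface normal
    l = light - p
    l = l / np.linalg.norm(l)
    # Local term (Lambertian stand-in for Phong; shadow ray omitted for brevity).
    colour = diffuse * max(np.dot(n, l), 0.0)
    if k_reflect > 0:              # stop: diffuse objects (k_reflect == 0) end the path
        r = v - 2 * np.dot(v, n) * n
        colour = colour + k_reflect * raytrace(p, r, spheres, light,
                                               depth + 1, weight * k_reflect)
    return colour

spheres = [
    (np.array([0.0, 0.0, -3.0]), 1.0, np.array([0.8, 0.2, 0.2]), 0.3),
    (np.array([2.0, 0.0, -3.0]), 1.0, np.array([0.2, 0.8, 0.2]), 0.0),
]
print(raytrace(np.zeros(3), np.array([0.0, 0.0, -1.0]),
               spheres, light=np.array([5.0, 5.0, 0.0])))
```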
Because classical ray tracing casts only a single shadow ray per intersection, the shadows it produces are hard-edged.
This is noticeably different from path tracing, which can produce renders that look nearly indistinguishable from photographs. Path tracing uses many rays per pixel, averaging the colour across them; at each interaction (bounce, reflection, etc.) the ray direction changes randomly according to some distribution. It is, however, significantly more expensive to compute.
Path tracing
We can get more effects by adding more rays. Each path can be thought of as a random walk of light through the scene, with each bounce determined probabilistically based on the material properties of the objects and the lighting environment.
- Anti-aliasing: multiple samples per pixel
- Motion blur: multiple samples over time
- Depth-of-field (lens blur): multiple samples over lens aperture
- Glossy reflections: multiple reflected rays with random distribution
- Soft shadows: multiple shadow rays for area light sources
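All of these effects share one pattern: jitter some sampling dimension and average the results. A minimal sketch for the anti-aliasing case, where `radiance` is a stand-in for tracing the actual ray through that point on the image plane:

```python
import numpy as np

rng = np.random.default_rng(0)

def radiance(x, y):
    """Stand-in for a full trace of the ray through (x, y) on the image plane;
    a hard-edged disc so the anti-aliasing effect is visible."""
    return 1.0 if x * x + y * y < 0.5 else 0.0

def render_pixel(px, py, pixel_size, samples=64):
    """Average many jittered samples per pixel. Jittering time, lens position,
    reflection direction, or shadow-ray target instead gives the other effects."""
    total = 0.0
    for _ in range(samples):
        # Jitter the sample position uniformly within the pixel footprint.
        x = px + (rng.random() - 0.5) * pixel_size
        y = py + (rng.random() - 0.5) * pixel_size
        total += radiance(x, y)
    return total / samples

# A pixel straddling the disc edge gets a fractional (anti-aliased) coverage value.
print(render_pixel(np.sqrt(0.5), 0.0, pixel_size=0.1))
```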
Coordinate Systems
Which way is up?
- Y is up
- Z is up
The axes can also be either of the following (on each hand, let the thumb be X, the index finger Y, and the middle finger Z):
- Left-handed
- Right-handed
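A quick numerical check of handedness (a small sketch; the cross-product identity X × Y = ±Z is the whole idea):

```python
import numpy as np

def is_right_handed(x, y, z):
    """True if the basis (x, y, z) is right-handed (assumes orthonormal axes):
    in a right-handed system X × Y = +Z, in a left-handed one X × Y = -Z."""
    return np.dot(np.cross(x, y), z) > 0

print(is_right_handed(np.array([1.0, 0, 0]), np.array([0, 1.0, 0]),
                      np.array([0, 0, 1.0])))   # True  (right-handed)
print(is_right_handed(np.array([1.0, 0, 0]), np.array([0, 1.0, 0]),
                      np.array([0, 0, -1.0])))  # False (left-handed)
```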
See also: NeRF, coordinate system
Intersection Tests
Line-plane
- Plane equation: $ax + by + cz + d = 0$, i.e. $\mathbf{n} \cdot \mathbf{p} + d = 0$ with normal $\mathbf{n} = (a, b, c)$
- $\mathbf{n}$: cross two edge vectors formed from any three points on the plane that are not colinear
- $\mathbf{p}_0$ is any point on the plane
- or equivalently $\mathbf{n} \cdot (\mathbf{p} - \mathbf{p}_0) = 0$, so $d = -\mathbf{n} \cdot \mathbf{p}_0$
- Line equation: $\mathbf{p}(t) = \mathbf{o} + t\,\mathbf{v}$
- Plug the line equation into the plane equation and solve for $t$ (worked below)
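Worked out, assuming the plane form $\mathbf{n} \cdot \mathbf{p} + d = 0$ and the line $\mathbf{p}(t) = \mathbf{o} + t\,\mathbf{v}$ from above:

$$\mathbf{n} \cdot (\mathbf{o} + t\,\mathbf{v}) + d = 0 \quad\Longrightarrow\quad t = -\frac{\mathbf{n} \cdot \mathbf{o} + d}{\mathbf{n} \cdot \mathbf{v}}$$

If $\mathbf{n} \cdot \mathbf{v} = 0$ the line is parallel to the plane: no intersection (or infinitely many, if the line lies in the plane).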
Ray-Triangle
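The notes leave this section empty; one standard choice here is the Möller–Trumbore test, sketched below under that assumption (it solves directly for the hit distance $t$ and the barycentric coordinates, rejecting points outside the triangle):

```python
import numpy as np

def ray_triangle(o, v, p0, p1, p2, eps=1e-8):
    """Möller–Trumbore: t where o + t*v hits triangle (p0, p1, p2), or None."""
    e1, e2 = p1 - p0, p2 - p0
    h = np.cross(v, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:
        return None                  # ray parallel to the triangle's plane
    f = 1.0 / a
    s = o - p0
    u = f * np.dot(s, h)
    if u < 0 or u > 1:
        return None                  # outside the triangle (barycentric u)
    q = np.cross(s, e1)
    w = f * np.dot(v, q)
    if w < 0 or u + w > 1:
        return None                  # outside the triangle (barycentric v)
    t = f * np.dot(e2, q)
    return t if t > eps else None    # smallest positive t only

# Unit triangle in the z = 0 plane, ray straight down through its interior.
print(ray_triangle(np.array([0.2, 0.2, 1.0]), np.array([0.0, 0.0, -1.0]),
                   np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                   np.array([0.0, 1.0, 0.0])))  # 1.0
```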
Ray-Sphere
- Sphere equation: $(x - c_x)^2 + (y - c_y)^2 + (z - c_z)^2 = r^2$
- Each of $x$, $y$, and $z$ is an equation of the form $o + t v$ (the components of the line $\mathbf{p}(t) = \mathbf{o} + t\,\mathbf{v}$)
- $\mathbf{c} = (c_x, c_y, c_z)$ is the center point of the sphere
- Solve the resulting quadratic in $t$ and take the smallest positive value (worked below)
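Written out, substituting $\mathbf{p}(t) = \mathbf{o} + t\,\mathbf{v}$ into $\|\mathbf{p} - \mathbf{c}\|^2 = r^2$ gives a quadratic in $t$:

$$\underbrace{\|\mathbf{v}\|^2}_{A}\,t^2 + \underbrace{2\,\mathbf{v} \cdot (\mathbf{o} - \mathbf{c})}_{B}\,t + \underbrace{\|\mathbf{o} - \mathbf{c}\|^2 - r^2}_{C} = 0, \qquad t = \frac{-B \pm \sqrt{B^2 - 4AC}}{2A}$$

A negative discriminant means the ray misses the sphere entirely; otherwise keep the smallest positive root, as above.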