Object Lights and Path Tracing
Welcome to the sixth part of my raytracer development blog posts. This part was the most challenging one by far. I have added objects that are also lights (luminous objects); before, the lights we used were invisible (e.g. point lights) and only lit up the scene. I have also added Spherical Directional Lighting, which allows an HDR texture in a special format to be used as a light source for the scene. Lastly, I have added path tracing, a Monte Carlo method for global illumination. I will give implementation details in the following sections.
cornellbox_jaroslav_diffuse.exr saved in 6,990s

cornellbox_jaroslav_glossy.exr saved in 6,252s
LightSphere
A light sphere is both a light source and a sphere object, so a LightSphere inherits from both the abstract light class and the sphere class. This means it must implement the sampleLight method (sample a point on the light source, perform the shadow check, and calculate the radiance arriving at the point of intersection).

When sampling the LightSphere, given the point of intersection, we build an orthonormal basis around the direction from that point to the sphere's center. From the point, only part of the sphere is visible, and that visible region subtends a cone. We sample a uniform direction within this cone. After we sample the direction, a ray can be sent toward the object itself to find the sample point. Then we check whether the intersection point is in shadow with respect to the sample point on the light. If it is not shadowed, the radiance value of the light is returned after dividing it by the pdf.
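As a rough illustration, the cone sampling might look like the following self-contained sketch. Vec3 and the helper names here are illustrative stand-ins, not my actual classes; the later snippets reuse these helpers.

```cpp
#include <algorithm>
#include <cmath>

const float PI = 3.14159265358979f;

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
Vec3 normalize(const Vec3& v) { return v * (1.0f / std::sqrt(dot(v, v))); }

// Build an orthonormal basis (u, v, w) around the axis w.
void buildOnb(const Vec3& w, Vec3& u, Vec3& v) {
    Vec3 a = std::fabs(w.x) > 0.9f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    u = normalize(cross(a, w));
    v = cross(w, u);
}

// Sample a direction uniformly inside the cone subtended by the sphere
// (center c, radius r) as seen from point x; pdfOut is w.r.t. solid angle.
Vec3 sampleSphereCone(const Vec3& x, const Vec3& c, float r,
                      float u1, float u2, float& pdfOut) {
    Vec3 w = c - x;
    float dist2 = dot(w, w);
    w = normalize(w);

    // Half-angle of the visible cone: sin(thetaMax) = r / distance.
    float sinThetaMax2 = r * r / dist2;
    float cosThetaMax  = std::sqrt(std::max(0.0f, 1.0f - sinThetaMax2));

    // Uniform over the cone's solid angle.
    float cosTheta = 1.0f - u1 * (1.0f - cosThetaMax);
    float sinTheta = std::sqrt(std::max(0.0f, 1.0f - cosTheta * cosTheta));
    float phi = 2.0f * PI * u2;

    Vec3 u, v;
    buildOnb(w, u, v);
    pdfOut = 1.0f / (2.0f * PI * (1.0f - cosThetaMax));
    return u * (sinTheta * std::cos(phi)) + v * (sinTheta * std::sin(phi)) + w * cosTheta;
}
```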
One very important note: after finding the sample point on the light, when checking whether any object lies between the intersection point and the sample point, the light object itself might get hit, and the point might end up shadowed when it should have been lit. This happens due to floating-point arithmetic. I thought of three possible solutions. The simplest is to move the sampled point along the shadow ray direction by an epsilon; this works fine. A second method is to add a bool parameter to the hit method so that hit returns false on luminous objects; this is a very clean solution for LightSpheres and the one I used. Lastly, a pointer to the object could be stored in the hit record so that a pointer equality check can be applied during the shadow check. This might actually be cleaner (especially for LightMeshes), but I have not really seen this approach used in books.
For the hit and shadowHit methods, what I do is call the Sphere::hit (or Sphere::shadowHit) method directly, put the radiance of the light in the hit record, and return. There is also a bool variable in the hit record indicating whether the object hit is luminous. Using both of these, if a luminous object is hit, we can set its color to the radiance value directly.
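Schematically, the delegation and the luminous flag from the previous paragraph might look like this. Class and field names are illustrative, and the ray-sphere test itself is elided:

```cpp
struct Ray { Vec3 origin, dir; };

struct HitRecord {
    float t = 0;                 // other geometric fields elided
    bool  isLuminous = false;
    Vec3  radiance{0, 0, 0};
};

class Sphere {
public:
    virtual ~Sphere() = default;
    virtual bool hit(const Ray& ray, HitRecord& rec, bool nonLuminousOnly) const {
        // ... actual ray-sphere intersection elided ...
        return false;
    }
};

class LightSphere : public Sphere {
public:
    Vec3 radiance{0, 0, 0};      // emitted radiance, read from the scene file

    bool hit(const Ray& ray, HitRecord& rec, bool nonLuminousOnly) const override {
        if (nonLuminousOnly) return false;             // invisible to shadow rays
        if (!Sphere::hit(ray, rec, nonLuminousOnly)) return false;
        rec.isLuminous = true;                         // shader can use radiance directly
        rec.radiance   = radiance;
        return true;
    }
};
```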
cornellbox_jaroslav_glossy_area_sphere.exr saved in 8,145s

cornellbox_jaroslav_glossy_area_ellipsoid.exr saved in 8,810s
LightMesh
LightMeshes are very similar to LightSpheres. Once again, the sampleLight method must be implemented, and the hit methods are modified just as in LightSphere. A LightMesh allows us to have light sources with arbitrary geometry.

The difference from LightSphere is that we do not sample a direction; instead, we directly sample a point on a triangle. First, we need to sample a triangle from the mesh. We do so by considering the areas of the triangles: larger triangles have a higher probability of being chosen. Note that transformations may affect a triangle's area, so they must be accounted for (area calculations should be done in world space). After calculating the areas, we generate the CDF. Then we can generate a random number between 0 and 1 and choose a triangle. After that, we uniformly sample a point on the chosen triangle to get our light sample. Once again, we must divide the radiance value by the pdf. A sketch of this sampling is given below.
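Here is a minimal sketch of the area-weighted CDF and the uniform triangle sampling. Triangle and the function names are assumptions, and the Vec3 helpers come from the earlier snippet:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Triangle { Vec3 a, b, c; };

float triangleArea(const Triangle& t) {
    Vec3 n = cross(t.b - t.a, t.c - t.a);
    return 0.5f * std::sqrt(dot(n, n));
}

// Build a normalized CDF over world-space triangle areas (done once).
std::vector<float> buildAreaCdf(const std::vector<Triangle>& tris, float& totalArea) {
    std::vector<float> cdf;
    cdf.reserve(tris.size());
    totalArea = 0.0f;
    for (const Triangle& t : tris) {
        totalArea += triangleArea(t);
        cdf.push_back(totalArea);
    }
    for (float& v : cdf) v /= totalArea;   // normalize to [0, 1]
    return cdf;
}

// Sample a point uniformly over the mesh surface; the pdf (w.r.t. area)
// is 1 / totalArea. u1 picks the triangle, u2 and u3 pick the point.
Vec3 sampleMeshPoint(const std::vector<Triangle>& tris, const std::vector<float>& cdf,
                     float u1, float u2, float u3) {
    size_t i = std::lower_bound(cdf.begin(), cdf.end(), u1) - cdf.begin();
    if (i >= tris.size()) i = tris.size() - 1;   // guard against u1 == 1
    const Triangle& t = tris[i];

    // Uniform barycentric coordinates via the square-root trick.
    float su = std::sqrt(u2);
    float b0 = 1.0f - su;
    float b1 = u3 * su;
    return t.a * b0 + t.b * b1 + t.c * (1.0f - b0 - b1);
}
```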
There is an important issue we must take care of: when sampling a triangle from the mesh, a back-facing triangle might get chosen. Such a triangle should not light the intersection point, as the mesh itself would block the shadow ray. For such cases, I chose to return zero radiance.
cornellbox_jaroslav_diffuse_area.exr saved in 7,89s

cornellbox_jaroslav_glossy_area.exr saved in 7,124s

cornellbox_jaroslav_glossy_area_small.exr saved in 7,61s
SphericalDirectionalLight
SphericalDirectionalLights are used to simulate global illumination without being computationally intensive. We are given an HDR texture in a special format, called the environment map, and we pretend the scene is enclosed by a sphere that is far away. For a point, we sample a direction in the hemisphere aligned with the surface normal and send a shadow ray in that direction. If the ray does not hit any object, it will hit the sphere enclosing the scene (no such sphere exists in the scene; we just pretend it does). We can then compute the u, v coordinates for that direction and fetch from the given environment map. That value is the radiance at the point, after we divide it by the pdf (1 / 2pi for uniform hemisphere sampling).
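A rough sketch of the hemisphere sampling and one common lat-long (equirectangular) uv mapping, again reusing the earlier helpers; the exact direction-to-uv convention depends on the environment map format:

```cpp
#include <algorithm>
#include <cmath>

// Uniformly sample a direction on the hemisphere around normal n;
// the pdf is 1 / (2 * pi).
Vec3 sampleHemisphereUniform(const Vec3& n, float u1, float u2) {
    float cosTheta = u1;
    float sinTheta = std::sqrt(std::max(0.0f, 1.0f - cosTheta * cosTheta));
    float phi = 2.0f * PI * u2;

    Vec3 u, v;
    buildOnb(n, u, v);   // orthonormal basis around the surface normal
    return u * (sinTheta * std::cos(phi)) + v * (sinTheta * std::sin(phi)) + n * cosTheta;
}

// Map a world-space direction to texture coordinates in [0, 1]^2.
void directionToUv(const Vec3& d, float& uOut, float& vOut) {
    uOut = (std::atan2(d.z, d.x) + PI) / (2.0f * PI);
    vOut = std::acos(std::max(-1.0f, std::min(1.0f, d.y))) / PI;
}
```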
head_env_light.exr saved in 2m16,427s
Path Tracing
Path tracing is a global illumination algorithm where we use the Monte Carlo method to calculate the indirect illumination at a point. So far, we have been calculating only the direct illumination: for each light source, we sent a ray from the intersection point to the light source and calculated the radiance. In real life, however, a point might also receive light from non-luminous objects that reflect light. The rendering equation says that we should consider a hemisphere aligned with the surface normal and integrate over all directions on that hemisphere to calculate the radiance at the point. Since evaluating this integral directly is not feasible, we use Monte Carlo integration instead. Indirect lighting at a point is calculated by sampling a direction from this hemisphere, spawning a ray in that direction, and tracing its path. When that path returns, we get a radiance value; multiplying it by the BRDF and cos(theta_i) and dividing by the pdf of the sampled direction gives the indirect radiance estimate at the point. As you increase the samples per pixel, you get less noisy images, as the results converge to the rendering equation.

Path tracing was (surprisingly) not so difficult to implement. In the scene files, Renderer and RendererParams elements were added to support path tracing. There are three RendererParams for path tracing. The first, NextEventEstimation, means that we send rays directly to the light sources (just like direct illumination) in addition to the indirect rays; this reduces noise by a considerable amount. An important note: if an indirect ray hits a luminous object, we should not add the light's radiance again, since we are already sampling the light sources directly; otherwise the light would be counted twice. The second, ImportanceSampling, means that we sample directions closer to the surface normal more often instead of uniformly sampling the hemisphere. The last, RussianRoulette, is used to terminate rays. Previously, we terminated rays according to a maximum recursion depth, and we used the same variable for indirect rays. However, there is no single maximum recursion depth that works for every scene. Instead, we assume each ray has a potential that decreases as it reflects off surfaces, and we generate a random number to decide whether to terminate the ray by comparing its potential with that number. The sketch below shows the importance sampling and Russian roulette pieces.
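Here is a minimal sketch of two of the RendererParams, reusing the earlier Vec3 helpers. The throughput bookkeeping is an assumption about how the path state might be carried, not necessarily my exact code:

```cpp
#include <algorithm>
#include <cmath>

// ImportanceSampling: cosine-weighted direction around the normal,
// pdf = cos(theta) / pi, which favors directions near the normal.
Vec3 sampleCosineHemisphere(const Vec3& n, float u1, float u2) {
    float r = std::sqrt(u1);
    float phi = 2.0f * PI * u2;
    float z = std::sqrt(std::max(0.0f, 1.0f - u1));   // cos(theta)

    Vec3 u, v;
    buildOnb(n, u, v);
    return u * (r * std::cos(phi)) + v * (r * std::sin(phi)) + n * z;
}

// RussianRoulette: continue a path with probability q and divide the
// surviving throughput by q so the estimator stays unbiased.
bool russianRoulette(Vec3& throughput, float u) {
    float q = std::min(0.95f, std::max(throughput.x,
                       std::max(throughput.y, throughput.z)));
    if (u > q) return false;               // terminate this path
    throughput = throughput * (1.0f / q);  // boost the survivors
    return true;
}
```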
Going back to the Renderer parameter: I wanted the option to support any renderer, because as a project I chose Photon Mapping, and its rendering is a bit different from ray tracing or path tracing. For this purpose, I introduced Integrators. I actually stumbled across this design idea in pbrt and decided to use a similar approach. There is an abstract base class called Integrator with a pure virtual render(...) method. I then refactored the code to create two derived classes, RaytracingIntegrator and PathtracingIntegrator. This makes it possible to add any sort of renderer by just implementing the render method.
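In outline, the interface looks something like this (Scene, Camera, and Image are placeholders for the renderer's actual types):

```cpp
struct Scene;
struct Camera;
struct Image;

class Integrator {
public:
    virtual ~Integrator() = default;
    // Each renderer implements its own full render loop.
    virtual Image render(const Scene& scene, const Camera& camera) = 0;
};

class RaytracingIntegrator : public Integrator {
public:
    Image render(const Scene& scene, const Camera& camera) override;
};

class PathtracingIntegrator : public Integrator {
public:
    Image render(const Scene& scene, const Camera& camera) override;
};
```

Since each integrator owns its entire render loop, a photon mapper can later slot in as just another derived class.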
diffuse.exr saved in 17,608s

diffuse_importance.exr saved in 18,775s

diffuse_importance_russian.exr saved in 12,663s

diffuse_next.exr saved in 30,133s

diffuse_next_importance.exr saved in 32,721s

diffuse_next_russian.exr saved in 21,917s

diffuse_next_importance_russian.exr saved in 22,978s

diffuse_russian.exr saved in 12,527s

glass.exr saved in 26,254s

glass_importance.exr saved in 27,264s

glass_importance_russian.exr saved in 19,541s

glass_next.exr saved in 43,703s

glass_next_importance.exr saved in 47,780s

glass_next_importance_russian.exr saved in 34,315s

glass_next_russian.exr saved in 33,684s

glass_russian.exr saved in 18,644s

sponza_direct.exr saved in 43,123s

sponza_path.exr saved in 9h31m34,918s

VeachAjar.exr saved in 1d15h16m36,33s