Transformations, Instancing and Motion Blur
Welcome to the third blog post of my raytracer development journey. In this part, I added three main functionalities to my program: transformations, instancing, and a leftover from distributed ray tracing - motion blur. I will talk about my experiences while developing these features: which parts were challenging, my approach, the bugs I encountered, and how I solved them.

Before we begin, the first thing I did was revert the recursive DRT to the normal version, because as the program gets more complex, the recursion would generate too many rays and take too long to render.
Transformations
Up to now, the vertex coordinates given in the scene description files have all been defined in the world coordinate system. To define an object that way, you have to know the properties of the space it will end up in. To solve this problem (and more), we define objects in their own local spaces and bring them into world space with transformations.

We have three types of transformations: translation, scaling and rotation, each represented by a 4x4 matrix. Detailed explanations of these matrices can be found in any introductory computer graphics book.
It was apparent that I needed a matrix class with several operations on it. Rather than writing one myself and potentially dealing with bugs, I decided to use a third-party library. After some research I found Math-Library-Test, where I learned that Eigen performs operations faster than glm on general CPUs. Eigen has been free software since version 3.1.1; however, when I inspected its license section, I saw that some features still rely on third-party code under the LGPL license, which I did not like very much. I was probably never going to use those features, but I nonetheless opted for glm.
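Since glm is what I ended up with, here is a quick sketch of how the three transformation types compose into a single model matrix with it; the concrete values are just for illustration:

```cpp
// Composing translation, rotation and scaling with glm. Applied to a
// point, the result reads right to left: scale, then rotate, then
// translate. The specific values here are arbitrary examples.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 composeTransform() {
    glm::mat4 M(1.0f);                                            // identity
    M = glm::translate(M, glm::vec3(1.0f, 0.0f, 0.0f));           // T
    M = glm::rotate(M, glm::radians(45.0f), glm::vec3(0, 1, 0));  // R
    M = glm::scale(M, glm::vec3(2.0f));                           // S
    return M;  // M = T * R * S
}
```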
Rewriting BVH
In my past work, I constructed the BVH with only triangles and spheres at the leaf nodes. In order to support instancing (which I will come to later), I changed how the BVH is constructed. Now, after I read the triangles of a mesh, I create a BVH for it and treat that BVH as a primitive, meaning the mesh BVH becomes a leaf node in the scene BVH.

Previously, I used a little trick at the leaf nodes: if I had only one object to put in a node, I duplicated it and placed it on both the left and the right. This let me get rid of nullptr checks at a very reasonable price. With the change I made, however, this would mean duplicating the BVH of a potentially complicated mesh, so I added the nullptr checks back in.
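Here is a condensed sketch of the traversal after that change; Hittable, BBox, Ray and HitRecord are simplified stand-ins for my actual classes:

```cpp
// BVH traversal with nullptr checks: children may now be missing, and a
// child can be another BVHNode, a primitive, or a whole mesh BVH.
#include <utility>
#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, direction; float time; };
struct HitRecord { float t; glm::vec3 point, normal; };

struct BBox {
    glm::vec3 min, max;
    // Standard slab test against the ray.
    bool intersects(const Ray& r, float tMin, float tMax) const {
        for (int a = 0; a < 3; ++a) {
            float inv = 1.0f / r.direction[a];
            float t0 = (min[a] - r.origin[a]) * inv;
            float t1 = (max[a] - r.origin[a]) * inv;
            if (inv < 0.0f) std::swap(t0, t1);
            tMin = t0 > tMin ? t0 : tMin;
            tMax = t1 < tMax ? t1 : tMax;
            if (tMax <= tMin) return false;
        }
        return true;
    }
};

struct Hittable {
    virtual bool hit(const Ray& r, float tMin, float tMax,
                     HitRecord& rec) const = 0;
    virtual ~Hittable() = default;
};

struct BVHNode : Hittable {
    BBox bbox;
    Hittable* left  = nullptr;  // may be null now, instead of a duplicate
    Hittable* right = nullptr;

    bool hit(const Ray& r, float tMin, float tMax,
             HitRecord& rec) const override {
        if (!bbox.intersects(r, tMin, tMax)) return false;
        bool hitLeft  = left  && left->hit(r, tMin, tMax, rec);
        // Shrink tMax so the right child only reports closer hits.
        bool hitRight = right && right->hit(r, tMin,
                                            hitLeft ? rec.t : tMax, rec);
        return hitLeft || hitRight;
    }
};
```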
One important thing to note regarding transformations and BVHs is that you can't simply apply the transformation to the min and max points of an object's bounding box. Instead, you should transform all 8 corner vertices of the box and take the min and max of the results.
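A minimal sketch of that, reusing the BBox struct from the previous snippet:

```cpp
// Transform all 8 corners of an axis-aligned box and rebuild the box
// from their component-wise min/max; transforming only min and max
// would give a wrong (and possibly inside-out) box under rotation.
#include <cfloat>
#include <glm/glm.hpp>

BBox transformBBox(const BBox& box, const glm::mat4& M) {
    BBox out{ glm::vec3(FLT_MAX), glm::vec3(-FLT_MAX) };
    for (int i = 0; i < 8; ++i) {
        // The three bits of i select min or max per axis, so the loop
        // enumerates all 8 corners.
        glm::vec3 corner((i & 1) ? box.max.x : box.min.x,
                         (i & 2) ? box.max.y : box.min.y,
                         (i & 4) ? box.max.z : box.min.z);
        glm::vec3 p = glm::vec3(M * glm::vec4(corner, 1.0f));
        out.min = glm::min(out.min, p);
        out.max = glm::max(out.max, p);
    }
    return out;
}
```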
Now, going back to how we integrate transformations into the ray tracer: we do not actually transform each vertex of the object defined in its local space. Instead, we transform rays by the inverse transform of the object, which greatly reduces the number of transformations applied, especially for a complex mesh with many triangles. After a hit, you should transform the HitRecord from local space back to world space. The t variable needs no transformation; it stays the same as long as you do not re-normalize the transformed ray direction. The hit point, however, must be transformed back with the object's matrix, and the surface normal with its inverse transpose (a plain transform would skew normals under non-uniform scaling).
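Here is a sketch of how a ray gets tested against a transformed object, building on the types above; ObjectInstance and its member names are illustrative, not my exact layout:

```cpp
// Intersect a transformed/instanced object: inverse-transform the ray,
// intersect in local space, then map the hit record back to world space.
#include <glm/glm.hpp>

struct ObjectInstance {
    glm::mat4 transform;      // local -> world
    glm::mat4 invTransform;   // world -> local
    const Hittable* meshBVH;  // the shared primary mesh BVH

    bool hit(const Ray& r, float tMin, float tMax, HitRecord& rec) const {
        // Bring the ray into local space. Do NOT normalize the
        // transformed direction, or local and world t values diverge.
        Ray local;
        local.origin    = glm::vec3(invTransform * glm::vec4(r.origin, 1.0f));
        local.direction = glm::vec3(invTransform * glm::vec4(r.direction, 0.0f));
        local.time      = r.time;

        if (!meshBVH->hit(local, tMin, tMax, rec)) return false;

        // t stays the same; the hit point goes back with the transform,
        // the normal with the inverse transpose.
        rec.point  = glm::vec3(transform * glm::vec4(rec.point, 1.0f));
        rec.normal = glm::normalize(glm::vec3(
            glm::transpose(invTransform) * glm::vec4(rec.normal, 0.0f)));
        return true;
    }
};
```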
Instancing
I have actually spent some time on how I should approach instancing. In the end, I decided to do the following (see the sketch after this list):

- After parsing a triangle mesh, build a BVH for it before applying any transformations; call it the primary mesh BVH.
- Create an ObjectInstance for the mesh that stores the transformation matrix along with its inverse, plus a pointer to the primary mesh BVH.
- For each mesh instance, create another ObjectInstance. If resetTransform is set, generate the transform matrices from the instance's own transforms; otherwise, apply the base mesh's transforms first. Store the pointer to the primary mesh BVH here as well.
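A rough sketch of that resetTransform handling, with the ObjectInstance from above and an illustrative makeInstance helper:

```cpp
// Build an instance that shares the primary mesh BVH; only the
// matrices differ between instances of the same mesh.
#include <glm/glm.hpp>

ObjectInstance makeInstance(const glm::mat4& instanceTransform,
                            const glm::mat4& baseMeshTransform,
                            bool resetTransform,
                            const Hittable* primaryMeshBVH) {
    ObjectInstance inst;
    // resetTransform discards the base mesh's transforms entirely;
    // otherwise they act first (the rightmost matrix applies first).
    inst.transform = resetTransform
                   ? instanceTransform
                   : instanceTransform * baseMeshTransform;
    inst.invTransform = glm::inverse(inst.transform);
    inst.meshBVH = primaryMeshBVH;
    return inst;
}
```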
On a side note, I also added vertex offset support, meaning that the face indices of triangles in the XML file are interpreted starting from the specified offset.
Motion Blur
Motion blur is a very cool looking effect that required only a small amount of work to put into the ray tracer. The general idea is connected to distributed ray tracing: we generate rays with a random time variable, and multisampling works its magic to create the effect.

A motion blurred object has a velocity vector. The time variable takes values between 0 and 1, so the maximum distance the object can move along an axis is given by the velocity vector. From the velocity vector we create a translation matrix and, for each ray, multiply the last column of this matrix by ray.time. This way the object sits at a different position for each sampled time, and by taking multiple samples with different times we obtain the blur.
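A minimal sketch of that per-ray motion transform, assuming the object stores a velocity vector; scaling the translation column by ray.time is the same as translating by time * velocity:

```cpp
// Per-ray motion transform: with time in [0, 1], the object moves by
// at most the full velocity vector over the shutter interval.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 motionTransform(const glm::vec3& velocity, float rayTime) {
    return glm::translate(glm::mat4(1.0f), rayTime * velocity);
}
```

Each camera ray gets a time drawn uniformly from [0, 1], and as the bug section below shows, secondary rays have to inherit that time.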
As for the bounding box of a moving object, I compute a "moved" box by adding the velocity vector to both the min and max of the original box, then take the surrounding box of the two. This way, I ensure the moving object is always checked correctly no matter where it is along its path.
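The corresponding computation, again with the BBox struct from above:

```cpp
// Union of the original box and the fully moved box; this conservatively
// contains the object at every time in [0, 1].
#include <glm/glm.hpp>

BBox sweptBBox(const BBox& box, const glm::vec3& velocity) {
    BBox moved{ box.min + velocity, box.max + velocity };
    return BBox{ glm::min(box.min, moved.min),
                 glm::max(box.max, moved.max) };
}
```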
In my ray tracer, motion blur is only supported for translations; rotation is a little trickier because getting the bounding box right is not so trivial.
Renders
- killeroo_glass.xml rendered in 39,822s
- metal_glass_plates.xml rendered in 26,482s
- cornellbox_dynamic.xml rendered in 2m19,774s
- cornellbox_boxes_dynamic.xml rendered in 2m2,017s
- dragon_dynamic.xml rendered in 5m56,611s
Fixed Bugs
When I first implemented motion blur, I did not preserve the time variable of rays that are generated from another ray (transformed rays, shadow rays, reflection rays etc.). This caused darkness on motion blurred objects even though the blur itself was there: the secondary rays were testing the object at a different time, and therefore a different position, than the primary ray. I fixed it by carrying the time variable over to all secondary rays. You might think the time should change along the way because of the speed of light, but light is pretty fast.

Motion blurred object is darker
I also found the bug in the glossy reflections from the last post. I had assumed there was no need to normalize a vector that is the cross product of two normalized vectors. However, the length of that cross product is the sine of the angle between the two vectors, so it can be less than one. I added the missing normalization to my orthonormal basis code.
Glossy reflections aren't very glossy
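A sketch of the corrected construction; the function name and the choice of helper axis are illustrative:

```cpp
// Build an orthonormal basis around a unit normal n. The first cross
// product has length sin(theta) and needs the normalize that I was
// missing; the second crosses two perpendicular unit vectors, so it
// is already unit length.
#include <cmath>
#include <glm/glm.hpp>

void orthonormalBasis(const glm::vec3& n, glm::vec3& u, glm::vec3& v) {
    // Pick a helper axis that is not (nearly) parallel to n.
    glm::vec3 a = (std::fabs(n.x) > 0.9f) ? glm::vec3(0.0f, 1.0f, 0.0f)
                                          : glm::vec3(1.0f, 0.0f, 0.0f);
    u = glm::normalize(glm::cross(n, a));
    v = glm::cross(n, u);
}
```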