This is a very old article, included for historical interest only!
Reflections are so easy to implement in a ray-tracer that it’s not surprising so many ray-traced images contain them. To implement reflective surfaces, we need to extend (or composite) our materials to support a reflectivity factor - this ranges from zero (no reflections at all) to one (a perfect mirror). Correspondingly, as the reflectivity rises from zero to one, the contribution of the surface’s natural colour is scaled from one down to zero.
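As a minimal sketch of that blend (in Python, with hypothetical names - the article’s own material code follows later), the natural and reflected colours can be mixed linearly by the reflectivity factor:

```python
def blend(surface_colour, reflected_colour, reflectivity):
    """Mix a surface's own colour with the colour seen in its reflection.

    reflectivity = 0.0 -> pure surface colour (no reflection)
    reflectivity = 1.0 -> perfect mirror (reflected colour only)
    """
    return tuple(
        (1.0 - reflectivity) * s + reflectivity * r
        for s, r in zip(surface_colour, reflected_colour)
    )

# A mostly-matte red surface reflecting a green object:
blend((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.25)  # -> (0.75, 0.25, 0.0)
```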
Calculating the colour of a reflective surface is (usually) a recursive process - a secondary ray is cast from the point of intersection, and the colour it returns contributes to the primary ray’s colour. Of course, if the secondary ray hits another reflective surface, yet another ray will be fired, and so forth… so a depth limit of some kind is usually applied, to cut off any potentially endless recursion.
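To make the depth limit concrete, here is a deliberately toy sketch in Python (the surface representation and the cap of 5 bounces are my assumptions, not from the article). Two mirrors facing each other would recurse forever without the limit:

```python
MAX_DEPTH = 5  # assumed cap on reflection bounces; tune to taste

def trace(surface, depth=0):
    """Toy recursive trace.

    Each 'surface' is (own_colour, reflectivity, next_surface), where
    next_surface is whatever the reflected ray would hit (None = nothing).
    """
    own, reflectivity, nxt = surface
    # Stop recursing if the surface isn't reflective, the reflected ray
    # hits nothing, or we've reached the depth limit.
    if reflectivity == 0.0 or nxt is None or depth >= MAX_DEPTH:
        return own
    reflected = trace(nxt, depth + 1)
    return tuple((1.0 - reflectivity) * o + reflectivity * r
                 for o, r in zip(own, reflected))

# Two dark mirrors pointed at each other - an endless hall of mirrors:
mirror_a = [(0.1, 0.1, 0.1), 0.9, None]
mirror_b = [(0.1, 0.1, 0.1), 0.9, None]
mirror_a[2] = mirror_b
mirror_b[2] = mirror_a
colour = trace(mirror_a)  # terminates only because of MAX_DEPTH
```

A real tracer would of course cast actual rays and intersect real geometry; the point here is just that `depth` is threaded through every recursive call and checked before spawning a secondary ray.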
Hopefully the image above makes things clearer. On the left is the camera/eye, casting a primary ray (marked “1”) which intersects a reflective sphere. (As we’ll see, the colour of the pixel should end up green.)
Since the surface is reflective, we recursively cast a secondary ray (marked “2”) at a vector that is the reflection of the primary ray against the surface (calculated against the intersection normal, “N”).
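That reflected vector is the standard mirror reflection: R = D − 2(D·N)N, where D is the incoming ray direction and N is the unit surface normal. A small sketch in Python (function name and tuple representation are my own):

```python
def reflect(d, n):
    """Reflect direction d about the unit normal n: R = D - 2(D.N)N."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray heading diagonally down onto a floor whose normal is (0, 1, 0)
# bounces back up at the same angle:
reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0))  # -> (1.0, 1.0, 0.0)
```

Note that N must be normalised for the formula to hold; most intersection routines return a unit normal anyway.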
This secondary ray intersects another object (the green sphere), and here we calculate the colour as usual based on the lighting calculations.
Since the final object isn’t reflective, the recursion ends there, and the pixel is assigned the colour calculated via the secondary ray.
Here’s my code for a reflective material: