Now that the light has left a light source, it can interact with an object. For now, I'll
discuss the interactions at the surface of opaque objects. It is important to know how much light
is reaching any point on the surface of an object.
When a surface is facing the light, full on, the maximum amount of light is reaching it. The
full area of the surface is receiving light.
When the surface is angled slightly away from the light, the area facing the light is reduced.
Less light is actually reaching the surface.
When the normal vector of the surface is at right angles to the oncoming light, the light
simply misses the surface.
So the amount of light reaching any surface is a function of the orientation of the surface
to the oncoming light.
illumination = cos(theta) * brightness
(where theta is the angle between the surface normal and the direction of the light; when theta exceeds 90 degrees, the light misses the surface entirely, so the result is clamped at zero)
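When the normal and the direction to the light are unit vectors, cos(theta) is simply their dot product, so the formula can be evaluated without ever computing the angle itself. A minimal sketch (the function name is my own):

```python
import math

def lambert_illumination(normal, to_light, brightness):
    """Diffuse illumination via the cosine law. cos(theta) is the dot
    product of the unit surface normal and the unit direction to the
    light. Past 90 degrees the light misses the surface, so clamp at zero."""
    cos_theta = sum(n * l for n, l in zip(normal, to_light))
    return max(0.0, cos_theta) * brightness

# Surface facing the light full on: the maximum amount of light.
print(lambert_illumination((0, 0, 1), (0, 0, 1), 100.0))  # 100.0
# Light arriving at 60 degrees: half as much (cos 60 = 0.5).
print(lambert_illumination((0, 0, 1),
                           (0, math.sin(math.radians(60)), math.cos(math.radians(60))),
                           100.0))
```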
What Happens Next?
Now, the light has a choice. It can be absorbed by the surface, reflect off the surface,
or pass through the surface.
Some of the light will be absorbed into the surface. This simply warms the surface. As far
as computer generated images go, you can ignore it from now on.
Much of the light will bounce off the surface. The direction in which it bounces will depend
somewhat on the surface.
If the surface is extremely smooth, the light will bounce straight off, in the plane containing
the surface normal and the incident vector of the light. This is what happens on the surface of
a mirror or a very polished piece of metal. The apparent brightness of the surface will depend
on the position of the eye.
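For a perfectly smooth surface, the bounced direction follows the standard mirror-reflection formula r = d - 2(d.n)n, where d is the incoming direction and n is the unit normal. A small sketch (the function name is my own):

```python
def reflect(incident, normal):
    """Mirror reflection: r = d - 2(d.n)n, with d the incoming direction
    and n the unit surface normal. The result lies in the plane spanned
    by d and n, as described above."""
    d_dot_n = sum(d * n for d, n in zip(incident, normal))
    return tuple(d - 2.0 * d_dot_n * n for d, n in zip(incident, normal))

# Light heading straight down onto a floor bounces straight back up.
print(reflect((0, 0, -1), (0, 0, 1)))  # (0.0, 0.0, 1.0)
```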
If the surface is very diffuse (rough), the light will be scattered evenly in all directions.
Off the top of my head, I cannot think of any materials that are totally diffuse. Rough wood is
quite diffuse, as is matt paint, but both still exhibit some shininess. The apparent brightness
of the surface will not depend on the position of the eye.
Most materials fall somewhere between these two extremes. They have both diffuse and shiny
properties. To see the diffuse light coming off the surface, the position of the eye will not
matter, but to see the shine, you will have to position your eye carefully.
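Lighting models usually capture this middle ground by adding a view-independent diffuse term to a view-dependent specular term. Below is a minimal Phong-style sketch, one common way to do it rather than a method taken from this article; the function and parameter names are my own:

```python
def shade(normal, to_light, to_eye, diffuse_k, specular_k, shininess):
    """Mix of the two extremes: a diffuse term that ignores the eye
    position, plus a specular term that peaks when the mirror direction
    of the light lines up with the eye. All vectors are unit length."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    diffuse = max(0.0, dot(normal, to_light))
    # Mirror the light direction about the normal, then compare with the eye.
    mirror = tuple(2.0 * dot(to_light, normal) * n - l
                   for n, l in zip(normal, to_light))
    specular = max(0.0, dot(mirror, to_eye)) ** shininess
    return diffuse_k * diffuse + specular_k * specular

# Eye in the mirror direction: both terms contribute.
print(shade((0, 0, 1), (0, 0, 1), (0, 0, 1), 0.6, 0.4, 10))
# Eye off to the side: only the diffuse term remains.
print(shade((0, 0, 1), (0, 0, 1), (1, 0, 0), 0.6, 0.4, 10))
```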
When light passes through a surface, it is passing from one material into another. When this
happens, some quantum effects come in and cause the light to change direction. The change in
direction is known as Refraction. The exact change in angle depends on the orientation of the
surface and the properties of the two materials.
This property is known as the Refractive Index. Space has an index of one, and air's is
a little higher. More solid materials have higher indices.
Refraction is a complex subject, and takes a large amount of computing power. It is more
suitable to ray tracing than realtime graphics. I will not go into detail here; suffice to say
that it is rarely attempted in realtime rendering.
And After That?
After interacting with a surface, assuming it has not been absorbed, the light will continue
on to interact with more objects. A single photon may reflect off many objects before finally
coming to rest. These multiple interactions are difficult to model, and take quite a lot of time
to render. When rendering realtime graphics, it is usually assumed that a light interacts with
an object only once.
Until now, I have only talked about light as if there is only one kind. There is, in fact,
only one kind, but it comes in an infinite variety of flavours.
Since light can be a wave, it can have a wavelength. There are infinitely many wavelengths,
but our eyes can only see a small fraction of them. These inhabit what is known as the visible
part of the spectrum. Wavelengths range from the tiny (millionths of a millimetre) to the massive
(kilometres, in the case of radio waves).
The RGB Model
The human eye is capable of detecting three different wavelength bands within the range 400nm to
680nm. We perceive these as red, green and blue. These are the three primary colours. (Forget what
you might hear from an artist, telling you that the three primary colours are red, yellow and blue.
That is only true for paints.) The reason these are the three primary colours is not some quirk
of physics, but because our eyes contain chemicals that detect specific wavelengths, corresponding
to these colours. Non-primary colours, such as yellow or pink, are simply combinations of the
three primaries.
Because of this, your TV and monitor contain red, green and blue pixels. This enables them
to reproduce almost any colour you can see. I say almost because the colour produced by a monitor
is not very pure, and so many hues are not obtainable exactly.
However, they are very limited in displaying
brightnesses. A monitor has a maximum displayable brightness, but the eye can cope with several
orders of magnitude of brightness. This can lead to difficulties when displaying images from
the real world which contain a large range of brightnesses, for example a photo taken outside
containing both the sky and areas of shadow. Later, when I have permission from my good friend
Matt Fairclough, I will explain a method which,
although it will not solve the problem, will make it easier to display images with a wide range of
brightnesses.
When modelling light on a computer, the three colours are usually handled separately. Except
in unusual circumstances the three colours do not affect each other. Sometimes true colour images
are created by rendering a red, a green and a blue image, then combining them.
Computers generally represent light by the amount of red, green and blue light it contains.
For example, white light is a mixture of equal parts of each. Yellow light is equal parts of red
and green. You might think of all the colours as existing inside a cube. One dimension represents
the red component, another the green component, and the last represents the blue component. The
cube contains all the colours that can be represented by your monitor.
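Additive mixing inside that cube is just component-wise addition, clamped so no channel exceeds the monitor's maximum. A small sketch (the function name is my own, and channels are assumed to lie in the range 0 to 1):

```python
def mix(*colours):
    """Additive mixing of RGB triples, clamped to the cube [0, 1]^3."""
    return tuple(min(1.0, sum(c[i] for c in colours)) for i in range(3))

red   = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
blue  = (0.0, 0.0, 1.0)

print(mix(red, green))        # yellow: (1.0, 1.0, 0.0)
print(mix(red, green, blue))  # white:  (1.0, 1.0, 1.0)
```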
The HSV Model
There is another way to think about colour, that tends to be easier for many people
to understand. This is the HSV model. HSV stands for Hue, Saturation and Value:
Hue: The colour, or relative proportions of red, green and blue.
Saturation: The strength of the colour. This is equivalent to the colour control found on most TV sets.
Value: The intensity. Zero means black. Higher values mean higher intensity.
Again, there are three values here, and all the possible colours can be plotted inside a cube.
Note: This is also sometimes known as HSL, where L stands for Luminance.
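Converting between the two models is a standard operation; Python's colorsys module in the standard library does it directly. Here all components are assumed to lie in the range 0 to 1, with hue 0 meaning red (the article itself doesn't fix these ranges):

```python
import colorsys

# Hue 0 with full saturation and value is pure red.
print(colorsys.hsv_to_rgb(0.0, 1.0, 1.0))  # (1.0, 0.0, 0.0)
# Dropping the saturation to zero washes the colour out to white/grey.
print(colorsys.hsv_to_rgb(0.0, 0.0, 1.0))  # (1.0, 1.0, 1.0)
# Value zero is black, whatever the hue.
print(colorsys.hsv_to_rgb(0.3, 1.0, 0.0))  # (0.0, 0.0, 0.0)
```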
Practical Use of Light
Now that the basics of light are out of the way, it's time to start thinking about it on a
more practical level.
Assumptions and Simplifications
As I said before, the exact method you choose to use to model light will depend on the
application. There are many assumptions that can be made to increase rendering speed.
Point Light Sources
For mathematical simplicity, light sources are usually considered to be single points in space.
Much of the time, this is not too far from reality. Light bulbs and spotlights tend to be quite
small compared with the objects they are illuminating. This becomes a problem, however, when
you want a scene to be lit by fluorescent strip lights, or by the sky. In this case, you can
approximate the system by using several dimmer point light sources to approximate a large one.
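That approximation can be sketched by spacing several point lights along the large source, each carrying a fraction of the total brightness. The names and the inverse-square falloff below are illustrative assumptions, not taken from this article:

```python
def point_light_illumination(point, light_pos, brightness):
    """Illumination at a point from one point light, falling off with
    the square of the distance (a common, though not the only, model)."""
    d2 = sum((p - l) ** 2 for p, l in zip(point, light_pos))
    return brightness / d2

def strip_light(point, start, end, brightness, samples=8):
    """Approximate a strip light by several dimmer point lights spaced
    evenly along its length, splitting the brightness between them."""
    total = 0.0
    for i in range(samples):
        t = (i + 0.5) / samples
        light = tuple(s + t * (e - s) for s, e in zip(start, end))
        total += point_light_illumination(point, light, brightness / samples)
    return total
```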
Calculating the effects caused by light reflecting from one surface onto another is complex
and time consuming, so it is usually ignored altogether. This is quite a reasonable assumption
for objects in space. Space is so huge compared to the objects it contains, that the effect of
multiple reflections is negligible. However, the difference between single and multiple
reflections is quite noticeable in a small room with white walls. Objects that are in direct
shadow are still lit by light reflecting off the walls and other surfaces.
Although shadows can provide the viewer with lots of useful information about
the depth of a scene, removing them is not necessarily a great loss. Depending
on the circumstances, it is often possible to make assumptions about shadows.
For example, you might be making a flight sim. In this case, seeing the
shadows of aircraft can be very important in determining their distance from
the ground. The world of a flight sim is quite simple, quite flat with one
main light source, the sun, and all the objects are fairly small and spread
out. This means that you can easily get away with only rendering shadows
on the land from the aircraft. You needn't bother drawing shadows on the
planes themselves. The same tends to be true for other similar games.
Scenes with static light sources and scenery will also have static shadows. It is possible
to precalculate the positions of shadows once. Then, this information can be used to render
shadows quickly. Quake used this system to good effect. All the shadows are calculated when a
map is built. The shadows are stored as shadow 'maps' which are then combined with the textures
at render time.
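The combining step can be sketched as a per-texel multiply of the texture by the precalculated light values. This is a simplified illustration rather than Quake's actual code; the names and the 0-to-1 value range are assumptions:

```python
def apply_lightmap(texture, lightmap):
    """Combine a texture with a precalculated shadow/light map by
    multiplying per texel. Both grids hold values in [0, 1], where a
    lightmap value of 1.0 means fully lit and lower values mean shadow."""
    return [[t * l for t, l in zip(trow, lrow)]
            for trow, lrow in zip(texture, lightmap)]

texture  = [[1.0, 0.5],
            [1.0, 0.5]]
lightmap = [[1.0, 1.0],   # fully lit row
            [0.2, 0.2]]   # row in precalculated shadow
print(apply_lightmap(texture, lightmap))  # [[1.0, 0.5], [0.2, 0.1]]
```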