A Physical Model Of Light

Light is a very complex system to model perfectly, which is why you see very few computer generated images that look photo-realistic. As is always the case, the more complex and realistic your simulation is, the more computation you will have to do, and thus the slower it will run. As a programmer, you have to decide what trade-offs you are willing to make: whether you want your program to look amazing, and thus take an hour to render a single image, or to run at 60 frames per second and look like a cartoon.

This article will explain some of the physical principles that cause things to look as they do, with a view to them being rendered by computer. It will also talk about the assumptions that are often made to increase rendering speed.

One Photon

Light consists of tiny packets of energy called photons. A photon is both a particle and a wave, meaning that it can exhibit wave behaviour or particle behaviour as it pleases. These packets are emitted from an energy source, and travel in straight lines until they interact with an object. Light can also be totally weird and spooky if it feels like it.


A photon can do one of several things when it hits an object.

  • Reflection: The photon bounces off the object.
  • Absorption: The photon is absorbed, and gives the object its energy.
  • Refraction: The photon travels through the object and changes direction depending on the properties of the object and its surroundings.
  • Diffraction: If the photon just misses the object, or passes between two objects which are very close together, it can change direction.

    More Than One Photon

    There are very many photons. So many, in fact, that you might as well say that there are infinitely many. Because of this, you can ignore the fact that light is composed of photons, and assume it is just a continuous stream of energy. Statistical laws can be applied to them, and they will be very accurate because of the vast numbers of photons involved. So light can, at least approximately, be modelled on a computer.

    The interactions of light with objects enable us to see them. Light is emitted from an energy source. Trillions of photons rush out, interact with the surroundings, bouncing around all over the place. A few hit the dark bit in the middle of our eyes. Our pupils are black for a good reason that I will come onto shortly. The materials in our eyes adjust the paths of the photons slightly before they reach the backs of our eyes. There, they are absorbed by three chemicals. These chemicals give off signals to the brain. The brain interprets the pattern of the signals, and provides you with detailed information on your surroundings. The image you see is not the same thing as the physical objects themselves. All you are receiving is a pattern of energy that has undergone many complex interactions. A blue object is not actually blue. It appears to be blue because you interpret the light coming off it as blue.

    Through experience, the brain learns what the various patterns of light imply about the surroundings. Babies take an object, look at it for a moment, then put it in their mouth. Their tongue is an excellent touch sensor and can determine the shape and surface texture of an object nearly as well as the eye, sometimes better. The baby learns to associate what it sees in an object with the shape its tongue describes. With time, the baby learns that the same object will look different depending on which way you hold it, even though it feels the same. You may think this is obvious, but it has been found that people who were blind and later gained sight find it very hard to comprehend. Likewise, they do not understand the concept of a shadow or a reflection, two things sighted people take for granted. Just because you can see does not mean you can understand what you see.

    This is the difference between Data and Information. Data is the pattern of light falling on the retina. Information is the interpretation of the pattern by the brain.

    In creating a picture of any kind, you are trying to create a pattern of light on the retina that will be interpreted as the object that the picture portrays. The experienced brain can be very smart, extracting vast amounts of information from an image. It is especially good at turning a 2 dimensional image into a 3 dimensional concept. To do this, it looks at the way the light has interacted with the scene, before it entered the eye.

    The various lighting models used in computer generated imagery are an attempt to increase the amount of information in an image for the brain to extract. When you write graphics routines, you should not be thinking, "I am writing a Phong shader"; instead you should be thinking, "I am providing visual cues for the brain to interpret".

    What Information Can The Brain Interpret?

    The human brain can interpret 4 sets of information from a stream of visual data.

  • Form:
    This is the overall shape of the objects in the scene, and the edges that surround them. The eye contains a hardwired edge-enhancement mechanism, similar in effect to the sharpening filters used in many paint programs, which helps to enhance edges.
  • Shade:
    Highlights, tones, shadows and textures.
  • Colour:
    Three colours can be detected by the eye: red, green and blue.
  • Movement:
    The brain is especially good at perceiving movement. Well camouflaged animals will give away their presence immediately if they move. Often, if I have lost the cursor on the screen, the easiest way to find it again is to move it.

    There are areas of the brain that deal with these 4 specific perceptions. This has been shown in many cases of brain damage. Occasionally a tumour knocks out an area of the brain that deals with one of the above, and the victim simply ceases to be able to perceive it anymore. In one case, a woman lost her ability to perceive movement. She could see just as normal, but was unable to detect the movement of objects. For example, she could see cars on the road, but was unable to tell at a glance if they were moving.
    Perception is something that most people take for granted. It is usually assumed that the four perceptions above come automatically with sight: that if you can see, then you can see form, shade, colour and movement. This is not the case.

    You Are Forgiven

    Equally important is the information that the brain adds and removes. When we see, we are taking in vast amounts of information. It would be impossible to analyse and remember all that data to the finest detail. It would also be pointless. Most of the data that comes in is quite useless. The brain will automatically filter out much of the rubbish, allowing you to concentrate on the more important information. What is more, the brain will also add information that is missing. Bad engineering in the eye means that we have a blind spot in our vision. We never notice it, though, because it is filled in with something appropriate. The brain is very forgiving.

    What this means for the graphics programmer is that you do not always have to render the image in minute perfect detail, because much of it will be ignored and filled in. You can get away with producing images that are less than perfect. Apparently in Return Of The Jedi, one of the spaceships is a shoe. Nobody noticed though, because everyone expected to see a spaceship, and there was an object that was approximately the right shape, so they saw a spaceship.

    You can get away with even less detail if the scene is moving. Press pause on the video recorder and look at the static image. It looks rubbish, but you never notice when it's moving.

    The goal of the realtime graphics programmer is to provide approximations of visual cues that enhance the realism of a scene and create mood. Let the brain do the rest. The goal of the photorealistic programmer is to attempt to model the interactions of light in a scene accurately enough that it will stand up to close scrutiny by an experienced brain.

    General Behaviour
    In this section I will begin discussing some of the general principles you can apply to programming graphics.

    Inverse Square Law

    How Bright Is A Light?

    Imagine you have a perfect lightbulb. This bulb has no volume whatsoever; it exists as a single point in space. It can be switched on and off at will, and takes no time to change state. This is the kind of light that it is possible to work with inside the virtual world of a computer. Such lights are impossible to create in the real world. As we shall see, though, real lights are very hard to create in the virtual world.

    Now, imagine you could flash this light on for a minute instant of time, the smallest slice of time that exists. In that instant, light begins to travel away from the energy source as the surface of an expanding sphere. Imagine looking at a small bit of the sphere.
    As the light travels, the size of the sphere increases and so the size of that little bit increases. The brightness of that little patch is proportional to the density of photons in it. As the size of the patch increases, the number of photons remains the same, so the density of photons decreases.
    The surface area of a sphere is proportional to the square of its radius. So the brightness of that little patch is proportional to one over the square of the distance from the light.

                    brightness = k / d²

    where k is some constant representing the brightness of the light source, and d is the distance from it.

    This is the inverse square law. It is obeyed by all ordinary lights; a laser's tightly collimated beam is the notable exception, since its light hardly spreads out at all.
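    As a minimal sketch (the constant and the distances are arbitrary illustrative values, and Python is used here purely as pseudocode):

```python
def brightness(k, distance):
    """Inverse square law: apparent brightness falls with the
    square of the distance from the light source."""
    return k / (distance * distance)

# Doubling the distance quarters the brightness.
near = brightness(100.0, 1.0)  # 100.0
far = brightness(100.0, 2.0)   # 25.0
```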

    Cosine Law

    How Bright Is A Surface?

    Now that the light has left a light source, it can interact with an object. For now, I'll discuss the interactions at the surface of opaque objects. It is important to know how much light is reaching any point on the surface of an object.

    When a surface is facing the light, full on, the maximum amount of light is reaching it. The full area of the surface is receiving light.

    When the surface is angled slightly away from the light, the area facing the light is reduced. Less light is actually reaching the surface.

    When the normal vector of the surface is at right angles to the oncoming light, the light simply misses the surface.

    So the amount of light reaching any surface is a function of the orientation of the surface to the oncoming light.
                    illumination = cos(theta) * brightness
    (where theta is the angle between the surface normal and the direction of the light)
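    The cosine drops straight out of a dot product between unit vectors, so the law is cheap to evaluate. A minimal sketch (the vectors here are illustrative; surfaces angled away from the light are clamped to zero rather than lit negatively):

```python
def illumination(normal, light_dir, brightness):
    """Cosine law: illumination = cos(theta) * brightness, where theta
    is the angle between the unit surface normal and the unit vector
    pointing towards the light. The cosine is the dot product of the
    two vectors, clamped so surfaces facing away receive no light."""
    cos_theta = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, cos_theta) * brightness

full = illumination((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 1.0)  # facing the light: 1.0
edge = illumination((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), 1.0)  # edge-on: 0.0
```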

    What Happens Next?

    Now, the light has a choice. It can either be absorbed by the surface, reflect off the surface or pass through the surface.

    Absorption
    Some of the light will be absorbed into the surface. This simply warms the surface. As far as computer generated images go, you can ignore it from now on.

    Reflection
    Much of the light will bounce off the surface. The direction in which it bounces will depend somewhat on the surface.

    If the surface is extremely smooth, the light will bounce straight off, staying in the plane containing the surface normal and the incident vector of the light. This is what happens on the surface of a mirror or a very polished piece of metal. The apparent brightness of the surface will depend on the position of the eye.
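    For such a mirror surface, the bounce direction can be computed from the incident direction and the surface normal with the standard reflection formula, R = I - 2(N·I)N. A quick sketch (the vectors are illustrative):

```python
def reflect(incident, normal):
    """Mirror reflection: R = I - 2(N.I)N, where I is the incoming
    direction of the light and N is the unit surface normal."""
    n_dot_i = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * n_dot_i * n for i, n in zip(incident, normal))

# A ray travelling down and to the right bounces off a horizontal
# floor: the vertical component of its direction flips.
bounced = reflect((0.707, -0.707, 0.0), (0.0, 1.0, 0.0))  # (0.707, 0.707, 0.0)
```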

    If the surface is very diffuse (rough), the light will be scattered evenly in all directions. Off the top of my head, I cannot think of any materials that are totally diffuse. Rough wood is quite diffuse, as is matt paint, but both still exhibit some shininess. The apparent brightness of the surface will not depend on the position of the eye.

    Most materials fall somewhere between these two extremes. They have both diffuse and shiny properties. To see the diffuse light coming off the surface, the position of the eye will not matter, but to see the shine, you will have to position your eye carefully.
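    One common way to model this mix, the Phong approach mentioned earlier, is to add a diffuse term (independent of the eye) to a specular term that peaks sharply when the eye lines up with the mirror reflection. A hedged sketch, with illustrative coefficients:

```python
def shade(n_dot_l, r_dot_v, k_diffuse, k_specular, shininess, light):
    """Phong-style shading sketch. n_dot_l is the cosine between the
    surface normal and the light direction (the cosine law); r_dot_v
    is the cosine between the mirror-reflection direction and the
    direction to the eye. Higher shininess gives a tighter highlight."""
    diffuse = k_diffuse * max(0.0, n_dot_l)
    specular = k_specular * max(0.0, r_dot_v) ** shininess
    return light * (diffuse + specular)

# Eye exactly on the mirror direction: full highlight plus diffuse.
peak = shade(1.0, 1.0, 0.6, 0.4, 32, 1.0)  # 1.0
# Eye well off the mirror direction: only the diffuse term remains.
off = shade(1.0, 0.0, 0.6, 0.4, 32, 1.0)   # 0.6
```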

    Refraction
    When light passes through a surface, it is passing from one material into another. Light travels at different speeds in different materials, and this change of speed causes the light to change direction. The change in direction is known as refraction. The exact change in angle depends on the orientation of the surface and the properties of the two materials.

    The properties are known as the refractive index. A vacuum has an index of exactly one, and air is a little higher. Denser materials generally have higher indices.

    Refraction is a complex subject, and takes a large amount of computing power. It is more suitable to ray tracing than realtime graphics. I will not go into detail here; suffice it to say that it happens.
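    For completeness, the relationship itself is simple to state: Snell's law says n1·sin(θ1) = n2·sin(θ2), where the angles are measured from the surface normal. A small sketch (the indices are rough textbook values, not measured ones):

```python
import math

def refraction_angle(n1, n2, incident_deg):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2). Returns the
    refracted angle in degrees, or None when the light cannot pass
    through (total internal reflection)."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Light entering glass (index roughly 1.5) from air (roughly 1.0)
# at 45 degrees bends towards the normal, to about 28 degrees.
bent = refraction_angle(1.0, 1.5, 45.0)
```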

    And After That?

    After interacting with a surface, assuming it has not been absorbed, the light will continue on to interact with more objects. A single photon may reflect off many objects before finally being absorbed. These multiple interactions are difficult to model, and take quite a lot of time to render. When rendering realtime graphics, it is usually assumed that a light interacts with an object only once.


    Until now, I have only talked about light as if there is only one kind. There is, in fact, only one kind, but it comes in an infinite variety of flavours.

    The Spectrum

    Since light can be a wave, it can have a wavelength. There are infinitely many wavelengths, but our eyes can only see a small fraction of them. These inhabit what is known as the visible part of the spectrum. Wavelengths range from the tiny (millionths of a millimetre) to the massive (kilometres).

    The RGB Model

    The human eye is capable of detecting three different wavelength bands from 400nm to 680nm. We perceive these as red, green and blue. These are the three primary colours. (Forget what you might hear from an artist, telling you that the three primary colours are red, yellow and blue. That is only true for paints.) The reason these are the three primary colours is not some quirk of physics, but because our eyes contain chemicals to detect specific wavelengths, corresponding to these colours. Non-primary colours, such as yellow or pink, are simply combinations of the primary colours.

    Because of this, your TV and monitor contain red, green and blue pixels. This enables them to reproduce almost any colour you can see. I say almost because the colour produced by a monitor is not very pure, and so many hues are not obtainable exactly.
    However, monitors are very limited in the brightnesses they can display. A monitor has a maximum displayable brightness, but the eye can cope with a range spanning several orders of magnitude. This can lead to difficulties when displaying images from the real world which contain a large range of brightnesses; for example, a photo taken outside containing both the sky and areas of shadow. Later, when I have permission from my good friend Matt Fairclough, I will explain a method which, although it will not solve the problem, makes it easier to display images with a wide range of intensities.

    When modelling light on a computer, the three colours are usually handled separately. Except in unusual circumstances the three colours do not affect each other. Sometimes true colour images are created by rendering a red, a green and a blue image, then combining them.

    Computers generally represent light by the amount of red, green and blue light it contains. For example, White light is a mixture of equal parts of each. Yellow light is equal parts of red and green. You might think of all the colours as existing inside a cube. One dimension represents the red component, another the green component, and the last represents the blue component. The cube contains all the colours that can be represented by your monitor.
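    The additive mixing described above is easy to demonstrate. A minimal sketch, using 0-255 channel values and clipping at the maximum as a real display would:

```python
def mix(*colours):
    """Additive colour mixing: red, green and blue channels add
    independently and clip at the display's maximum of 255."""
    return tuple(min(255, sum(c[i] for c in colours)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

yellow = mix(RED, GREEN)        # (255, 255, 0)
white = mix(RED, GREEN, BLUE)   # (255, 255, 255)
```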

    The HSV Model

    There is another way to think about colour, that tends to be easier for many people to understand. This is the HSV model. HSV stands for Hue, Saturation and Value:

    Again, there are three values here, and all the possible colours can be plotted inside a cube.

    Note: A closely related model is HSL, where L stands for Lightness. The two are similar, but not quite the same.

  • Hue: The colour, or relative proportions of red, green and blue.
  • Saturation: The strength of the colour. This is equivalent to the colour control found on most TV sets.
  • Value: The intensity. Zero means black. Higher values mean higher intensity.
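    Python's standard colorsys module converts between the two models, which makes the meaning of the three values easy to see (all values are in the range 0 to 1 here):

```python
import colorsys

# Hue 0.0 with full saturation and value is pure, bright red.
red = colorsys.hsv_to_rgb(0.0, 1.0, 1.0)     # (1.0, 0.0, 0.0)

# Dropping the saturation washes the colour out to white...
washed = colorsys.hsv_to_rgb(0.0, 0.0, 1.0)  # (1.0, 1.0, 1.0)

# ...while dropping the value darkens it to black.
dark = colorsys.hsv_to_rgb(0.0, 1.0, 0.0)    # (0.0, 0.0, 0.0)
```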

    Practical Use of Light

    Now that the basics of light are out of the way, it's time to start thinking about it on a larger scale.

    Assumptions and Simplifications

    As I said before, the exact method you choose to use to model light will depend on the application. There are many assumptions that can be made to increase rendering speed.

    Point Light Sources

    For mathematical simplicity, light sources are usually considered to be single points in space. Much of the time, this is not too far from reality. Light bulbs and spotlights tend to be quite small compared with the objects they are illuminating. This becomes a problem, however, when you want a scene to be lit by fluorescent strip lights, or by the sky. In this case, you can approximate a large light source with several dimmer point light sources.
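    That approximation can be sketched directly: spread the total brightness over several sample points along the strip. A hedged sketch (the function names and sample count are illustrative, and the cosine term is omitted for brevity):

```python
def point_illumination(surface, light, brightness):
    """Inverse-square falloff from one point light."""
    d2 = sum((s - l) ** 2 for s, l in zip(surface, light))
    return brightness / d2

def strip_illumination(surface, start, end, brightness, samples=8):
    """Approximate a strip light as several dimmer point lights spaced
    along its length, each carrying an equal share of the total."""
    total = 0.0
    for i in range(samples):
        t = (i + 0.5) / samples  # sample position along the strip
        p = tuple(a + t * (b - a) for a, b in zip(start, end))
        total += point_illumination(surface, p, brightness / samples)
    return total
```

    A strip collapsed to a single point gives the same answer as one point light of the full brightness, which is a handy sanity check.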

    Multiple Reflections

    Calculating the effects caused by light reflecting from one surface onto another is complex and time consuming, so these multiple reflections are usually ignored. This is quite a reasonable assumption for objects in space. Space is so huge compared to the objects it contains that the effect of multiple reflections is negligible. However, the difference between single and multiple reflections is quite noticeable in a small room with white walls. Objects that are in direct shadow are still lit by light reflecting off other surfaces.


    No Shadows

    Although shadows can provide the viewer with lots of useful information about the depth of a scene, removing them is not necessarily a great loss. Depending on the circumstances, it is often possible to make assumptions about shadows. For example, you might be making a flight sim. In this case, seeing the shadows of aircraft can be very important in determining their distance from the ground. The world of a flight sim is quite simple: fairly flat, with one main light source, the sun, and all the objects are fairly small and spread out. This means that you can easily get away with only rendering the shadows cast by the aircraft on the land. You needn't bother drawing shadows on the planes themselves. The same tends to be true for other similar games.

    Static Shadows

    Scenes with static light sources and scenery will also have static shadows. It is possible to precalculate the positions of shadows once. Then, this information can be used to render shadows quickly. Quake used this system to good effect. All the shadows are calculated when a map is built. The shadows are stored as shadow 'maps' which are then combined with the textures at runtime.
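    The runtime combination is then just a per-texel multiply. A hedged sketch of the idea (the values are illustrative, not taken from Quake itself):

```python
def shade_texel(texture_rgb, light_level):
    """Combine a texture colour with a precalculated shadow-map value
    (0.0 = fully shadowed, 1.0 = fully lit), in the spirit of
    Quake-style lightmapping at runtime."""
    return tuple(int(c * light_level) for c in texture_rgb)

lit = shade_texel((200, 160, 120), 1.0)        # (200, 160, 120)
shadowed = shade_texel((200, 160, 120), 0.25)  # (50, 40, 30)
```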