Understanding How Satellite Images Are Created


While our eyes only detect a fraction of all available light, satellite sensors can capture, and send back, much more information. This information is relayed back to us in a format quite different from the photographs we are used to.

Satellites capture data by assigning a digital value to each pixel based on the reflectance, measured by the sensor, of the corresponding area on the ground (or in the atmosphere above it) within a predetermined band of the light spectrum. High reflectance means a high value; low reflectance, a low value. When presented as an image, the high values appear white and the low values black. That means all satellite data is black and white before processing.
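This mapping from reflectance to greyscale pixel value can be sketched in a few lines. The reflectance figures below are hypothetical, and the linear scaling to 8-bit values is a simplification of what real sensors do:

```python
import numpy as np

# Hypothetical reflectance values (fraction of light reflected, 0.0 to 1.0)
# for three ground pixels in a single spectral band.
reflectance = np.array([0.05, 0.40, 0.95])  # dark water, bare soil, fresh snow

# Map reflectance linearly onto 8-bit digital values: high reflectance
# becomes a high value (rendered white), low reflectance a low value (black).
digital_numbers = np.clip(np.round(reflectance * 255), 0, 255).astype(np.uint8)

print(digital_numbers)  # snow pixel is near 255 (white), water near 0 (black)
```

Displaying one such band on its own produces exactly the black-and-white image described above.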

For example, a snowy mountain top will appear close to white in images from all bands of the visible spectrum, because snow reflects all visible light. But it will be darker in infrared images, because snow reflects less light in the infrared.

Creating composite images

To produce coloured composites, data from two or more bands of the spectrum are merged together and each band is assigned a colour. Most satellite images you will see have been modified in some way, as even natural-colour images tend to have low contrast and a blue, hazy hue that makes it hard to distinguish between features.

True-colour composite images, such as the ones found on Google Maps, use the red, green, and blue bands gathered by satellites to mimic the range of vision for the human eye, showing us images closer to what we would expect to see in a normal photograph.
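A true-colour composite amounts to stacking the red, green, and blue bands into one image, usually after a contrast stretch to counter the low contrast mentioned above. The tiny band arrays and stretch limits below are invented for illustration:

```python
import numpy as np

# Hypothetical single-band images (2x2 pixels) as 8-bit digital values.
red_band   = np.array([[ 30,  60], [ 90, 120]], dtype=np.uint8)
green_band = np.array([[ 40,  70], [100, 130]], dtype=np.uint8)
blue_band  = np.array([[ 50,  80], [110, 140]], dtype=np.uint8)

def stretch(band, low=30, high=140):
    """Simple linear contrast stretch: remap [low, high] to [0, 255]."""
    scaled = (band.astype(float) - low) / (high - low) * 255
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Assign each band to its matching display colour and stack them into a
# height x width x 3 RGB image: a true-colour composite.
true_colour = np.dstack([stretch(red_band), stretch(green_band), stretch(blue_band)])

print(true_colour.shape)  # (2, 2, 3)
```

Real processing pipelines use more careful stretches (percentile clipping, gamma correction), but the principle of one band per display colour is the same.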

Since Google Maps images are the result of multiple satellite passes on different days (to capture entire scenes and ensure there were no clouds), they have also been digitally fused to smooth out the transitions from one image to another, so they look like one large image to users.

Additionally, satellites capture information in the non-visible parts of the light spectrum. Different features (rock, bare soil, vegetation, burned ground, snow, sediment-rich water, etc.) all have different reflectance properties in each band. This is called a 'spectral signature'.

To highlight specific features, one or more of the RGB bands can be substituted with another band, such as infrared or near-infrared, which are not visible to the human eye. These images are referred to as false-colour composite images.
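The substitution itself is just a different band-to-colour assignment. The sketch below uses the classic near-infrared false-colour scheme (NIR shown as red, red as green, green as blue), with made-up band values; healthy vegetation, which reflects strongly in near-infrared, comes out bright red:

```python
import numpy as np

# Hypothetical 8-bit band data for a 2x2 area. The top row is vegetation,
# which reflects strongly in near-infrared (NIR), invisible to our eyes.
nir_band   = np.array([[200, 210], [ 40,  50]], dtype=np.uint8)
red_band   = np.array([[ 30,  35], [ 90, 100]], dtype=np.uint8)
green_band = np.array([[ 60,  65], [ 80,  85]], dtype=np.uint8)

# False-colour composite: display NIR as red, red as green, green as blue.
false_colour = np.dstack([nir_band, red_band, green_band])

print(false_colour[0, 0])  # vegetation pixel: red channel dominates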

To better discriminate between features and highlight changes over time, mathematical formulas can also be applied to the data to produce a new kind of processed image. These are referred to as indexes.
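A well-known example is NDVI, the Normalized Difference Vegetation Index, computed as (NIR − Red) / (NIR + Red) for every pixel. The reflectance values below are hypothetical:

```python
import numpy as np

# Hypothetical per-pixel reflectances in the NIR and red bands
# for three pixels: vegetation, bare soil, water.
nir = np.array([0.50, 0.30, 0.10])
red = np.array([0.08, 0.25, 0.15])

# NDVI ranges from -1 to 1: vegetation scores high because it reflects
# much more NIR than red; water typically comes out negative.
ndvi = (nir - red) / (nir + red)

print(np.round(ndvi, 2))  # vegetation high, soil near zero, water negative
```

The resulting single-band image can then be rendered with a colour ramp so that, for instance, dense vegetation appears green and bare ground brown.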
