The standard approach to converting colour images into greyscale is to remove their chromatic content, leaving the brightness information untouched. While this delivers acceptable greyscale outputs, discarding colour causes loss of detail wherever edges are equiluminant, i.e., where adjacent regions differ in hue but not in brightness. The problem is even more pronounced when displaying multispectral data (e.g., remote sensing data captured by satellite or aerial imaging), which often comprise information from both visible and non-visible wavelengths (e.g., infra-red).
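To make the failure mode concrete, the following minimal sketch (not part of the presentation itself) uses the standard ITU-R BT.601 luminance weights; the second colour is a hypothetical example chosen so its luminance matches mid-grey, so the edge between the two vanishes in the greyscale output.

```python
# Standard luminance conversion: a weighted sum of R, G, B that
# discards all chromatic information (ITU-R BT.601 weights).
def to_grey(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

mid_grey = (0.5, 0.5, 0.5)
# Hypothetical colour tuned to have the same luminance as mid_grey:
magenta_ish = (1.0, 0.2, 0.7333)

print(to_grey(*mid_grey))     # 0.5
print(to_grey(*magenta_ish))  # ~0.5: an equiluminant edge disappears
```

Any pair of colours lying on the same luminance level set collapses to one grey value, which is exactly the detail loss described above.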
In this presentation I will propose a method for converting colour images to greyscale based on the colour tensor gradient, and explain how this notion readily generalises to multispectral inputs. The method yields a greyscale image that preserves detail, but it often suffers from visible artefacts. I will show that these artefacts are closely linked to edges with high chromatic content, and that they can therefore be mitigated by attenuating the chromatic content of the input image. While this attenuation tends to reduce the contrast of the output, the initial contrast can be recovered by means of retinex-like algorithms.
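As a rough illustration of the central notion, here is a minimal sketch of a colour (structure) tensor gradient in the Di Zenzo style; the function name and the exact formulation are assumptions for illustration, not necessarily the method presented. Because the channel sum runs over any number of bands, the same code applies unchanged to multispectral inputs.

```python
import numpy as np

def tensor_gradient_magnitude(img):
    """Colour tensor gradient magnitude (Di Zenzo-style sketch).

    img: H x W x C float array; C may be 3 (RGB) or any number of
    spectral bands. Returns an H x W map of the strongest local
    contrast taken jointly over all channels.
    """
    fy, fx = np.gradient(img, axis=(0, 1))   # per-channel derivatives
    gxx = (fx * fx).sum(axis=2)              # structure-tensor entries,
    gyy = (fy * fy).sum(axis=2)              # summed over channels
    gxy = (fx * fy).sum(axis=2)
    # Largest eigenvalue of the 2x2 tensor: squared contrast in the
    # direction of maximal change at each pixel.
    lam = 0.5 * (gxx + gyy + np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2))
    return np.sqrt(lam)

# A purely chromatic edge (red | green) still registers:
img = np.zeros((4, 6, 3))
img[:, :3, 0] = 1.0   # left half: red
img[:, 3:, 1] = 1.0   # right half: green
print(tensor_gradient_magnitude(img)[2, 2])  # > 0 at the colour edge
```

The key point is that the tensor responds to change in any channel, so edges invisible to a plain luminance gradient are still detected.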