Chapter 10. Onboard Software: Image Compression

The major functions of the flight software are command processing, instrument control, image processing and compression, status monitoring, and telemetry control. In this chapter we describe the image processing and compression options available for LASCO data.

The low spacecraft telemetry rate (4.2 Kbps) would result in a transmission time of about 60 minutes for a full 1024x1024, 2 bytes per pixel image. (We note that the individual pixel intensities from the camera analog-to-digital converters are actually represented by only 14 bits, leaving a factor of 4 headroom in the 16 bits which are reserved in the data format.) Thus, image compression is desirable. A number of image processing techniques have been included in the flight software, from simple square root through transform encoding. The types of processing and compression utilized will be determined by ground command and executed through stored sequences, with parameters stored in tables. After a camera image is stored in a 2 Mbyte image buffer, the appropriate algorithms are applied. Image columns that are known to be bad will be replaced by the average intensities of the adjacent, or nearest good, columns. The locations of the bad columns (for each CCD) will be stored in a bitmap located in RAM, which can be updated as necessary.

Two general classes of techniques are included in the flight software. The first class consists of image processing techniques which, although they do not by themselves reduce the telemetry required to transmit an image, transform the image into a format more suitable for true compression. The second class consists of true image compression techniques, which directly reduce the telemetry load. These comprise both geometrical techniques, which reduce the total number of pixels composing an image, and coding techniques, which reduce the number of bits necessary to transmit an image (possibly one to which other techniques have already been applied). Transform encoding yields the highest compression ratio, but is very computationally intensive.

Additional compression is obtained by transmitting only the pixels that are not obscured by the occulting disk or the aperture stop. For some situations, the microprocessor will compute intensities only along a radial spoke, thereby saving up to 89% of the telemetry for a full image. Time resolution can be traded against field coverage to further reduce the data download requirement.

10.1 Image Processing Techniques

These techniques prepare the image for true compression by transforming the original image into a new image whose intensity histogram is more concentrated, i.e., most pixel values fall into fewer histogram bins than in the original image:

Division by 2:
The image intensities are divided by 2. This can be repeated to obtain an even smaller range of intensities. This is equivalent to representing the intensities with one less bit for each division.

Square Root:
The square roots of the intensities are computed. This reduces the number of bits required to represent the maximum intensity by a factor of 2.
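As an illustration (not flight code), the two scaling operations above can be sketched in NumPy; the function names are hypothetical. Note how the square root takes a 14-bit maximum value (16383) down to a 7-bit one (127):

```python
import numpy as np

def scale_divide(img, n=1):
    """Integer-divide intensities by 2**n, dropping n bits per pixel."""
    return img >> n  # right shift is floor division by 2**n

def scale_sqrt(img):
    """Integer square root: halves the bits needed for the maximum value."""
    return np.floor(np.sqrt(img)).astype(img.dtype)

# A hypothetical 14-bit image: the maximum value 16383 needs 14 bits,
# while its integer square root (127) fits in 7 bits.
img = np.array([[16383, 4096], [100, 0]], dtype=np.uint16)
print(scale_divide(img, 1))  # [[8191 2048] [  50    0]]
print(scale_sqrt(img))       # [[127  64] [ 10   0]]
```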

Differencing:
A current image is differenced from a second image taken earlier in time, and the differences are transmitted. In a series of images, the second image can be either a constant reference image or the image immediately preceding the current one in the series. In either case, the original images can be reconstructed from the differenced images on the ground. The potential compression factor depends upon the extent of variation between the two images; these variations are due to photon noise and to real temporal evolution of the corona.
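A minimal sketch of previous-image differencing and its lossless ground reconstruction (illustrative only; the function names are hypothetical):

```python
import numpy as np

def encode_diff(images):
    """First image sent whole; later images as signed differences
    from their immediate predecessor."""
    out = [images[0]]
    for prev, cur in zip(images, images[1:]):
        out.append(cur.astype(np.int32) - prev.astype(np.int32))
    return out

def decode_diff(encoded):
    """Reconstruct the original series by cumulative summation."""
    imgs = [encoded[0].astype(np.int32)]
    for d in encoded[1:]:
        imgs.append(imgs[-1] + d)
    return imgs

series = [np.array([[10, 20]], dtype=np.uint16),
          np.array([[12, 19]], dtype=np.uint16),
          np.array([[12, 25]], dtype=np.uint16)]
enc = encode_diff(series)      # enc[1] is [[2, -1]]: small values, easy to code
dec = decode_diff(enc)         # round trip recovers the originals exactly
```

The differences themselves cluster near zero when the corona changes slowly, which is exactly the concentrated histogram the coding techniques below exploit.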

Image Summing:
The sum of a sequence of scaled images is transmitted. Since the intensity of an individual pixel in an original camera image is represented by only 14 bits, there is enough headroom in the 16-bit output format to add together 4 original images; for more than 4 images, the sum can exceed the 16-bit limit and wrap the output. The scale factor, often set to division by 2, avoids wrapping in the final image. In addition, individual scale factors can be negative, to perform differencing. This technique replaces a series of images by one image, and so can be considered a true compression technique, with a compression factor equal to the number of images in the sum. However, the individual images cannot be recovered on the ground.
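The headroom arithmetic above can be sketched as follows (illustrative only; the function name and signed-scale convention are assumptions, not the flight implementation). Dividing each of eight saturated 14-bit frames by 2 keeps the sum, 8 x 8191 = 65528, just inside the 16-bit limit:

```python
import numpy as np

def scaled_sum(images, scales):
    """Accumulate images divided by per-image scale factors (a negative
    scale subtracts, i.e. performs differencing); the result is clipped
    into the 16-bit telemetry format."""
    acc = np.zeros(images[0].shape, dtype=np.int64)
    for img, s in zip(images, scales):
        acc += np.sign(s) * (img.astype(np.int64) // abs(s))
    return np.clip(acc, 0, 0xFFFF).astype(np.uint16)

# Eight hypothetical 14-bit frames at full scale (16383 each):
frames = [np.full((2, 2), 16383, dtype=np.uint16) for _ in range(8)]
summed = scaled_sum(frames, [2] * 8)
print(summed[0, 0])  # 65528, still within 16 bits
```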

10.2 Image Compression Techniques

The geometric data compression techniques for LASCO are:

Geometric:
The pixels that are beyond the field limit or that are occulted by the occulting disk are not transmitted. Depending upon the telescope, the compression factor is between 1.3 and 1.5.

Subregion:
Any subregion of the 1024x1024 CCD may be read out. This will be used to trade field coverage for time resolution. The only restrictions are that the subregion must be a multiple of 32 pixels on a side and must begin at a pixel location divisible by 32. The subregion may be rectangular.
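The readout constraints stated above are simple to check programmatically; a sketch (the function name is hypothetical):

```python
def valid_subregion(x0, y0, width, height, ccd=1024):
    """Check the stated readout rules: the subregion must start on a
    32-pixel boundary, have sides that are multiples of 32, and fit
    on the 1024x1024 CCD. Rectangular shapes are allowed."""
    return (x0 % 32 == 0 and y0 % 32 == 0
            and width % 32 == 0 and height % 32 == 0
            and width > 0 and height > 0
            and x0 + width <= ccd and y0 + height <= ccd)

print(valid_subregion(64, 128, 256, 96))   # True: aligned rectangle
print(valid_subregion(10, 0, 256, 256))    # False: start not on a boundary
print(valid_subregion(992, 0, 64, 32))     # False: runs off the CCD
```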

Pixel Summing:
The LEB can form pixel sums (binning) of any rectangular size n x m, where n and m are positive integers. This feature is also available on the CCD chip itself, but is then limited to the dynamic range (14 bits) of the analog-to-digital converter.
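An n x m sum-binning step can be sketched with a reshape, assuming the image dimensions divide evenly by the bin sizes (the function name is hypothetical):

```python
import numpy as np

def bin_pixels(img, n, m):
    """Sum-bin an image into n x m pixel blocks; image dimensions
    must be exact multiples of n and m."""
    h, w = img.shape
    # Group rows into (h//n, n) and columns into (w//m, m), then sum
    # over the within-block axes.
    return img.reshape(h // n, n, w // m, m).astype(np.int64).sum(axis=(1, 3))

img = np.arange(16, dtype=np.uint16).reshape(4, 4)
print(bin_pixels(img, 2, 2))  # [[10 18] [42 50]]
```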

Radial Spoke:
An image is transformed into polar coordinates. Along 1° wide sectors (pie-shaped pieces), the averages along the 512 perpendicular chords are computed. Each of the 360 sectors is replaced by the 512 average values along a spoke through the center of the sector. This alone would produce a compression factor of 1024x1024/(360x512) = 5.7. An annular ring is then specified between an inner radius and an outer radius, and only that ring is transmitted. By also discarding the pixels beyond the field limit or occulted by the occulting disk ("Geometric"), the compression factor becomes 6-9, depending upon the telescope.
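A simplified sketch of the polar reduction, binning pixels by sector and radius about the image center and averaging within each bin (illustrative only; the flight code's chord-averaging details are not reproduced, and the function name is hypothetical):

```python
import numpy as np

def radial_spokes(img, n_sectors=360, n_radii=512):
    """Average image intensities into (sector, radius) bins about the
    image center, reducing the pixel count to n_sectors * n_radii."""
    h, w = img.shape
    y, x = np.indices((h, w))
    dx, dy = x - w / 2.0, y - h / 2.0
    r = np.hypot(dx, dy)
    theta = np.mod(np.degrees(np.arctan2(dy, dx)), 360.0)
    sector = np.minimum((theta / 360.0 * n_sectors).astype(int), n_sectors - 1)
    rbin = np.minimum((r / r.max() * n_radii).astype(int), n_radii - 1)
    flat = sector * n_radii + rbin
    sums = np.bincount(flat.ravel(), weights=img.ravel().astype(float),
                       minlength=n_sectors * n_radii)
    counts = np.bincount(flat.ravel(), minlength=n_sectors * n_radii)
    means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    return means.reshape(n_sectors, n_radii)

# A uniform test image maps to spokes whose populated bins all average 1.
img = np.ones((64, 64), dtype=np.uint16)
spokes = radial_spokes(img, n_sectors=8, n_radii=16)
```

At full size (360 sectors x 512 radii) this reproduces the 1024x1024/(360x512) = 5.7 reduction quoted above, before the annular-ring and geometric cuts.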

The coding compression techniques can be divided into two categories, lossless or lossy, depending upon whether the image can be reconstructed on the ground with no loss of information, or with some (small or negligible) loss. The LASCO options are:

Rice:
The Rice algorithm is a lossless scheme that creates a unique code for each intensity value that occurs in the image intensity histogram. The code has a variable number of bits, depending upon the frequency of that intensity value; the most frequent intensity is coded with the fewest bits. Because no code is a prefix of any other code, there is no need for a marker between individual intensity codes. However, less frequent intensity values can have codes many more bits in length than the actual uncoded value. The Rice technique compares three different schemes for forming the unique code, as well as a fourth, uncompressed code, and outputs the image using the most efficient coding scheme, together with a code indicating the scheme selected. The analysis is done in blocks of 32x32 pixels. The compression factor is variable, but is often about 2.
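A toy Golomb-Rice coder illustrates the prefix-free, variable-length idea (this is a generic textbook form, not the LASCO flight implementation; function names are hypothetical). Small values get short codes, large values can exceed their uncoded length, and the unary-quotient-plus-remainder structure needs no markers between codes:

```python
def rice_encode(values, k):
    """Golomb-Rice: unary-coded quotient (v >> k) terminated by '0',
    followed by the k-bit binary remainder."""
    out = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        out.append("1" * q + "0" + (format(r, "0%db" % k) if k else ""))
    return "".join(out)

def rice_decode(bits, k, n):
    """Invert rice_encode for n values; no inter-code markers needed."""
    vals, i = [], 0
    for _ in range(n):
        q = 0
        while bits[i] == "1":
            q += 1
            i += 1
        i += 1  # skip the terminating '0'
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        vals.append((q << k) | r)
    return vals

data = [3, 0, 1, 7, 2]
code = rice_encode(data, k=1)
print(code)  # 101000111101100 -- 15 bits, decodable without separators
```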

ADCT (Adaptive Discrete Cosine Transform):
This is a lossy scheme, and one of the most efficient transforms for concentrating the information content into the fewest bits. The adaptive feature examines the statistics of each image to determine the coefficients of the cosine transform matrix that represent the image intensities most efficiently in the least number of bits. Compression is then achieved by eliminating higher-order coefficients. The compression factor is not fixed, but is selectable up to about 100. Of course, the higher the compression factor, the higher the information loss; compression factors above about 15-20 generally introduce unacceptable errors. The ADCT is computationally intensive, but its execution time does not depend upon the degree of compression. The transform is performed on blocks of 32x32 pixels.
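A non-adaptive sketch of the underlying idea: transform each block with a 2D DCT, zero the higher-order coefficients, and invert. (This illustrates transform compression only; the ADCT's per-image adaptivity, coefficient coding, and 32x32 block size are not reproduced, and the function names are hypothetical. Smaller 8x8 blocks are used here for brevity.)

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix, so the inverse is the transpose."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def block_dct_compress(img, block=8, keep=4):
    """Per-block 2D DCT; zero all but the keep x keep low-order
    coefficients, then invert. keep=block reproduces the input."""
    C = dct_matrix(block)
    out = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            b = img[i:i + block, j:j + block].astype(float)
            coef = C @ b @ C.T           # forward 2D DCT
            coef[keep:, :] = 0           # discard high-order rows
            coef[:, keep:] = 0           # ... and columns
            out[i:i + block, j:j + block] = C.T @ coef @ C  # inverse
    return out

# A smooth synthetic image survives a 4x reduction in retained
# coefficients with a small normalized mean square error.
img = np.fromfunction(lambda y, x: 100 + 5 * np.sin(x / 3.0), (16, 16))
rec = block_dct_compress(img, block=8, keep=4)
err = np.mean((rec - img) ** 2) / np.mean(img ** 2)
```

Because coronal images are dominated by smooth radial gradients, most of the energy lands in the low-order coefficients, which is why discarding the rest loses little.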

These compression schemes may be combined to achieve a higher overall compression. The average compression factor is expected to be about 10. Thus, on average, the transmission time should be about 6 minutes per image, which would allow around 200 images to be transmitted each day.

Figure 10-1 shows, at top left, an eclipse image that was compressed by a factor of 10.5 and then reconstructed using three different compression techniques. The top right image uses the ADCT technique described above, while the bottom two images use techniques (the Adaptive Hadamard Technique and Block Truncation Coding) that were ultimately not implemented in the LASCO software. The image quality losses for all three techniques are minimal, as indicated by the small normalized mean square errors of the three reconstructed images relative to the original.
