ABSTRACT

The multi-source digital elevation models (DEMs) generated from images acquired during the descent and landing phases of the Chang’e-3 mission, as well as after landing, contain complementary information that allows a higher-quality DEM to be produced by fusing these multi-scale DEMs. The proposed fusion method consists of three steps. First, the source DEMs are split into small DEM patches, which are classified into a small number of groups by local density peak clustering. Next, the grouped DEM patches are used to learn sub-dictionaries by stochastic coordinate coding, and the trained sub-dictionaries are combined into a single dictionary for sparse representation. Finally, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to compute the sparse representation of the patches over this dictionary, from which the fused DEM is reconstructed. We validate the proposed method with real DEMs generated from Chang’e-3 descent images and navigation camera stereo images. In our experiments, we reconstruct a seamless DEM that attains both the highest resolution and the broadest spatial coverage of any of the input data. The experimental results demonstrate the feasibility of the proposed method.
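
For readers unfamiliar with the final step, the sketch below illustrates a generic simultaneous orthogonal matching pursuit (SOMP) routine in NumPy. It is only an illustration of the algorithm named in the abstract, not the authors' implementation; the dictionary D, the signal matrix Y, the sparsity k, and the toy data are all hypothetical placeholders.

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous orthogonal matching pursuit (generic sketch).

    D : (m, n) dictionary with unit-norm columns (atoms).
    Y : (m, s) matrix of s signals assumed to share a common support,
        e.g. vectorized DEM patches covering the same area.
    k : target sparsity (number of atoms to select).

    Returns an (n, s) coefficient matrix X with at most k nonzero rows,
    such that D @ X approximates Y.
    """
    n = D.shape[1]
    s = Y.shape[1]
    residual = Y.copy()
    support = []
    coeffs = np.zeros((0, s))
    for _ in range(k):
        # Select the atom with the largest aggregate correlation
        # against the residuals of all signals simultaneously.
        scores = np.abs(D.T @ residual).sum(axis=1)
        scores[support] = -np.inf  # never reselect an atom
        support.append(int(np.argmax(scores)))
        # Jointly refit all selected atoms to all signals by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ coeffs
    X = np.zeros((n, s))
    X[support, :] = coeffs
    return X

# Toy check: two signals built from three shared dictionary atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
true_support = sorted(rng.choice(128, size=3, replace=False))
Y = D[:, true_support] @ rng.standard_normal((3, 2))
X = somp(D, Y, k=3)
print(sorted(np.flatnonzero(np.abs(X).sum(axis=1) > 1e-8)))  # recovered support
print(true_support)
```

Selecting atoms by the correlation summed over all signals is what enforces a common support across the jointly coded patches, which is the defining difference between SOMP and running ordinary OMP on each signal separately.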