Abstract:
Accurate and timely landslide mapping plays a critical role in emergency response and long-term land-use planning. Deep learning-based methods, represented by convolutional neural networks, have been widely exploited for automatic landslide detection owing to their outstanding feature-representation capability and end-to-end learning mode. However, most recent deep learning-based studies have relied on commercially licensed (toll-access) high-resolution imagery. Considering the demands of future large-scale landslide mapping, this study aims to develop a new deep learning-based method that detects landslides using medium-resolution imagery and digital elevation model (DEM) data, both of which are freely accessible and globally available. First, a workflow for constructing the landslide dataset is developed. Then, we design a semantic segmentation model that learns deep features and generates per-pixel landslide predictions. Specifically, the proposed network has a dual-encoder architecture with feature fusion to hierarchically represent deep features from the optical bands and the DEM data. We also employ a self-attention module in the decoder of the proposed network to improve performance. Experiments on two regions demonstrate that our method achieves the best F1 score of 79.24%, outperforming SegNet, U-Net, and Attention U-Net, the models widely used in semantic segmentation-based landslide detection. The proposed method shows application potential in disaster risk assessment and post-disaster reconstruction, and provides a technical reference for future large-scale landslide mapping.
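The self-attention module mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation — the function names, weight matrices, and feature-map shapes below are hypothetical — but it shows the standard scaled dot-product self-attention computation that such a decoder module typically applies to a flattened spatial feature map:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (n, d) feature map flattened to n spatial positions, d channels.
    # Wq, Wk, Wv: (d, d) projection matrices (hypothetical names).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # (n, n) attention weights: each position attends to all positions.
    A = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)
    return A @ V  # re-weighted features, same shape as X

rng = np.random.default_rng(0)
n, d = 16, 8  # e.g. a flattened 4x4 feature map with 8 channels
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (16, 8)
```

Because every position attends to every other position, such a module captures long-range spatial context that plain convolutions (with their limited receptive fields) can miss — one plausible reason it helps delineate landslide boundaries.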