Abstract:
Accurate estimation of above-ground biomass (AGB) plays a significant role in characterizing crop growth status. In precision agriculture, a widely used approach to measuring AGB is to develop regression relationships between AGB and agronomic traits extracted from multi-source images acquired by unmanned aerial vehicle (UAV) systems. However, this approach requires expert knowledge and discards information contained in the raw images. The objectives of this study are to (i) determine how multi-source images contribute to AGB estimation in individual growth stages and across the whole season; and (ii) evaluate the robustness and adaptability of deep convolutional neural networks (DCNNs) and other machine learning algorithms for AGB estimation. To establish the multi-source image datasets, this study collected UAV red-green-blue (RGB) and multispectral (MS) images and constructed raster data for crop surface models (CSMs). Agronomic features were derived from these images and interpreted by multiple linear regression, random forest, and support vector machine models. A DCNN model was then developed using an image-fusion architecture. Results show that the DCNN model provides the best estimation of maize AGB when a single type of image is considered, whereas its advantage diminishes when sufficient agronomic features are available. In addition, the information content of the three image datasets varies across growth stages: the structural information derived from CSM images is more valuable than the spectral information derived from RGB and MS images in the vegetative stage, but less useful in the reproductive stage. Finally, a data fusion strategy is proposed according to the available onboard sensors (or cost).