The experiments are conducted on SAR images of three target classes from the MSTAR dataset; the sample composition is described in Table 1. BMP2 and T72 each comprise three variants, distinguished by serial number (SN), while BTR70 has only one. The test samples were collected at a 15° depression angle and the training samples at 17°. The ground-truth azimuth of each training sample can be read from the original MSTAR annotation records and serves as the reference for computing azimuth estimation accuracy. To better demonstrate the effectiveness of the proposed method, the following experiments compare it with the minimum enclosing rectangle (MER) method of Ref. [4], the dominant boundary method of Ref. [5], and the sparse-representation-based method of Ref. [11].
Table 1. Training and test samples of the three MSTAR targets

| Type | Depression angle/(°) | BMP2 | BTR70 | T72 |
|---|---|---|---|---|
| Training | 17 | 233 (SN_1), 232 (SN_2), 233 (SN_3) | 233 (SN_1) | 232 (SN_1), 231 (SN_2), 233 (SN_3) |
| Test | 15 | 195 (SN_1), 196 (SN_2), 196 (SN_3) | 196 (SN_1) | 196 (SN_1), 195 (SN_2), 191 (SN_3) |
The proposed algorithm is applied to estimate the azimuths of the test samples of the three target classes, and the estimation error of each test sample is obtained by comparing the estimated azimuth with its ground truth. Table 2 summarizes the results, where an estimate is counted as correct if it deviates from the true value by no more than ±10°, and as an error otherwise. Under this criterion, the proposed method correctly estimates the target azimuth for more than 99% of the test samples, which fully demonstrates its high performance. Table 3 refines the analysis by counting the errors over different intervals: for most test samples the estimation error is kept within 5°, indicating both a high correct rate and high estimation precision. Figure 3 shows the error distribution in more detail; for most test samples the azimuth estimation error stays within 2°, further showing that the proposed method achieves not only high precision but also strong robustness.
Table 2. Azimuth estimation results of the test samples of the three MSTAR targets by the proposed method

| Target class | Number of samples | Number of errors | Percentage of correct samples |
|---|---|---|---|
| BMP2 (SN_1) | 195 | 1 | 99.49% |
| BMP2 (SN_2) | 196 | 0 | 100% |
| BMP2 (SN_3) | 196 | 1 | 99.49% |
| BTR70 (SN_1) | 196 | 2 | 98.98% |
| T72 (SN_1) | 196 | 3 | 98.47% |
| T72 (SN_2) | 195 | 1 | 99.49% |
| T72 (SN_3) | 191 | 0 | 100% |
Table 3. Results of the proposed method at different estimation precisions

| Target class | Number of samples | Error <5° | Error <10° | Mean | Variance |
|---|---|---|---|---|---|
| BMP2 | 587 | 573 | 585 | 2.01 | 1.85 |
| BTR70 | 196 | 192 | 194 | 2.05 | 1.84 |
| T72 | 582 | 569 | 578 | 2.16 | 1.77 |
| Total | 1365 | 1334 | 1357 | 2.07 | 1.81 |
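The per-class statistics in Tables 2 and 3 follow from the per-sample angular errors. A minimal sketch of how they could be computed is given below; the function names and the 360°-wrapped error definition are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def angular_error(est_deg, true_deg):
    """Smallest absolute difference between two azimuths, wrapped so that
    e.g. 359 deg vs 2 deg gives 3 deg rather than 357 deg."""
    d = np.abs(np.asarray(est_deg, dtype=float) - np.asarray(true_deg, dtype=float)) % 360.0
    return np.minimum(d, 360.0 - d)

def summarize(est_deg, true_deg, thresholds=(5.0, 10.0)):
    """Count samples whose error falls under each threshold, plus the
    mean and variance of the errors (the Table 3 columns)."""
    err = angular_error(est_deg, true_deg)
    counts = {t: int(np.sum(err < t)) for t in thresholds}
    return counts, float(err.mean()), float(err.var())

# toy usage with made-up estimates and ground truths
counts, mean_err, var_err = summarize([10.0, 92.0, 359.0], [12.0, 95.0, 2.0])
```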
Table 4 compares the overall performance of the proposed method with that of the other azimuth estimation methods. Under the various error thresholds, the proposed method significantly outperforms the two geometry-based methods, i.e., the minimum enclosing rectangle method and the dominant boundary method. It also effectively overcomes the 180° ambiguity inherent in those two methods. Compared with the sparse representation method, the proposed method again performs better, and the advantage becomes more pronounced as the required estimation error decreases. Because block sparse Bayesian learning accounts for the inherent azimuthal sensitivity of SAR images, it can locate the azimuth interval of a test sample more accurately, and the interval azimuth weighting scheme of this paper then yields a more accurate estimate. Table 5 compares the efficiency of the methods, i.e., the time needed to estimate a single test sample. The proposed method has the lowest time consumption, demonstrating its efficiency. The traditional methods based on the target's binary region first require fairly elaborate image preprocessing and target segmentation, and therefore cost more time. Compared with the sparse representation method, the block sparse Bayesian learning used here is more efficient, so the overall estimation algorithm is faster. These results fully validate the effectiveness and robustness of solving the sparse representation coefficients with block sparse Bayesian learning and estimating the azimuth jointly over the azimuth interval.
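The interval azimuth weighting step referred to above can be sketched as follows. The coefficient-magnitude weights and the unit-vector (circular) averaging are assumptions about implementation details the text does not fully specify:

```python
import numpy as np

def weighted_azimuth(block_azimuths_deg, block_coeffs):
    """Linearly fuse the azimuths of the training samples in the optimal
    block, weighting each by the magnitude of its sparse coefficient.
    Averaging is done on unit vectors so that angles near the 0/360
    boundary are handled correctly."""
    az = np.deg2rad(np.asarray(block_azimuths_deg, dtype=float))
    w = np.abs(np.asarray(block_coeffs, dtype=float))
    w = w / w.sum()                      # normalize weights to sum to 1
    c = np.sum(w * np.cos(az))           # weighted mean direction, x-component
    s = np.sum(w * np.sin(az))           # weighted mean direction, y-component
    return float(np.rad2deg(np.arctan2(s, c)) % 360.0)

# toy block straddling the 0/360 boundary: the fused estimate should be near 0
est = weighted_azimuth([358.0, 359.0, 1.0, 2.0], [1.0, 1.0, 1.0, 1.0])
```

A plain arithmetic mean of these four angles would give 180°, which is why the vector averaging matters near the boundary.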
Figure 3. Number of correctly estimated samples by the proposed algorithm at different estimation precisions
Table 4. Correct estimation percentages of different methods at different estimation precisions

| Method type | Threshold of error 2° | 4° | 6° | 8° | 10° |
|---|---|---|---|---|---|
| Proposed | 76% | 88% | 97% | 99% | 99% |
| MER | 13% | 24% | 39% | 57% | 68% |
| Dominant boundary | 55% | 82% | 93% | 97% | 99% |
| Sparse representation | 64% | 80% | 93% | 98% | 99% |
Table 5. Time consumption of different methods

| Method type | Average time consumption/ms |
|---|---|
| Proposed | 10.5 |
| MER | 45.2 |
| Dominant boundary | 40.2 |
| Sparse representation | 12.1 |
Target azimuth estimation of synthetic aperture radar image based on block sparse Bayesian learning
doi: 10.3788/IRLA20210282
- Received Date: 2021-05-06
- Rev Recd Date: 2021-06-06
- Publish Date: 2022-05-06
Key words:
- synthetic aperture radar
- azimuth estimation
- block sparse Bayesian learning
- linear weighting
Abstract: A target azimuth estimation algorithm for Synthetic Aperture Radar (SAR) images based on block sparse Bayesian learning was proposed. SAR images are highly sensitive to target azimuth: an SAR image at a particular azimuth is strongly correlated only with samples at nearby azimuths. The proposed method was developed from the idea of sparse representation. First, all training samples were sorted by azimuth to construct a global dictionary. The sparse coefficients of a test sample over this global dictionary should then be block sparse, that is, the non-zero coefficients concentrate in a local segment of the global dictionary. The positions of the solved blocks effectively reflect the azimuthal information of the test sample. The block sparse Bayesian learning (BSBL) algorithm was employed to solve the block sparse coefficients, and the candidate blocks were then chosen by minimizing the reconstruction error. Within the optimal block, the azimuth estimate was obtained by linearly fusing the azimuths of all training samples in the block, so that a robust estimation result could be achieved. The proposed method takes the azimuthal sensitivity of SAR images into account and comprehensively exploits the valid information in a local dictionary, thus avoiding the instability of relying on a single reference training sample. Experiments were conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset to validate the effectiveness of the proposed method in comparison with several classical algorithms. The experimental results confirm its superior performance.
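The block-selection step described in the abstract can be sketched as follows. This is a highly simplified illustration: per-block least squares stands in for the BSBL coefficient solver, and the sliding-block search over an azimuth-sorted dictionary is an assumed simplification of the paper's procedure:

```python
import numpy as np

def select_block(dictionary, y, block_size):
    """Slide a contiguous block over the columns of the azimuth-sorted
    dictionary, reconstruct the test sample y by least squares within each
    block, and return the start index of the block with the smallest
    reconstruction error (least squares stands in for the BSBL solver)."""
    n_atoms = dictionary.shape[1]
    best_start, best_err = 0, np.inf
    for start in range(n_atoms - block_size + 1):
        A = dictionary[:, start:start + block_size]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = float(np.linalg.norm(y - A @ coef))
        if err < best_err:
            best_start, best_err = start, err
    return best_start, best_err

# toy example: the test sample lies in the span of columns 3 and 4,
# so the block covering exactly those columns should minimize the error
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 10))          # 10 "training samples" sorted by azimuth
y = 0.5 * D[:, 3] + 1.5 * D[:, 4]      # synthetic test sample
start, err = select_block(D, y, block_size=2)
```

The start index of the winning block identifies the azimuth interval of the training samples it covers, whose azimuths are then linearly fused to produce the final estimate.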