Citation: Yang Nan, Nan Lin, Zhang Dingyi, Ku Tao. Research on image interpretation based on deep learning[J]. Infrared and Laser Engineering, 2018, 47(2): 203002-0203002(8). DOI: 10.3788/IRLA201847.0203002

Research on image interpretation based on deep learning

Abstract: Convolution Neural Networks (CNN) and Recurrent Neural Networks (RNN) have developed rapidly in image classification, computer vision, natural language processing, speech recognition, machine translation, and semantic analysis, drawing wide attention from researchers to the automatic generation of image descriptions. The main open problems in image description are sparse input text data, over-fitting of the model, and loss functions that oscillate and are difficult to converge. This paper uses NIC as the baseline model. To address data sparseness, the one-hot text representation of the baseline is replaced with word2vec embeddings of the text. To prevent over-fitting, a regularization term is added to the model and Dropout is applied. As an innovation in word-order memory, the associative memory unit GRU is introduced for text generation. In the experiments, the Adam optimizer (AdamOptimizer) is used for iterative parameter updates. The experimental results show that the improved model has fewer parameters and converges much faster, the loss curve is smoother, the loss falls to as low as 2.91, and the model's accuracy is nearly 15% higher than that of NIC. The experiments confirm that mapping text with word2vec clearly alleviates the data-sparseness problem, that the regularization term and Dropout effectively prevent over-fitting of the model, and that introducing the associative memory unit GRU greatly reduces the number of trainable parameters and speeds up convergence, thereby improving the accuracy of the whole model.
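To make the described pipeline concrete, below is a minimal sketch of such a modified decoder, written in PyTorch. The class name CaptionDecoder, the layer sizes, and the way pretrained word2vec vectors are loaded into the embedding are illustrative assumptions, not the authors' implementation; the weight_decay argument of Adam stands in for the regularization term mentioned in the abstract.

```python
# Minimal sketch of an NIC-style decoder with the abstract's modifications:
# word2vec embeddings instead of one-hot input, a GRU instead of the
# baseline LSTM, Dropout against over-fitting, and Adam for updates.
# Names and sizes are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512,
                 cnn_feat_dim=2048, w2v_weights=None, dropout=0.5):
        super().__init__()
        # Dense word2vec embedding replaces the sparse one-hot representation.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        if w2v_weights is not None:  # assumed (vocab_size, embed_dim) tensor
            self.embedding.weight.data.copy_(w2v_weights)
        # GRU has fewer gates (and so fewer parameters) than an LSTM.
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.init_h = nn.Linear(cnn_feat_dim, hidden_dim)  # image feature -> h0
        self.dropout = nn.Dropout(dropout)                 # guards against over-fitting
        self.fc = nn.Linear(hidden_dim, vocab_size)        # per-step next-word scores

    def forward(self, cnn_features, captions):
        # Initialize the GRU state from the CNN image feature.
        h0 = torch.tanh(self.init_h(cnn_features)).unsqueeze(0)
        x = self.dropout(self.embedding(captions))
        out, _ = self.gru(x, h0)
        return self.fc(self.dropout(out))

# Adam handles the iterative parameter updates; weight_decay approximates
# the L2 regularization term (an assumption about its exact form).
model = CaptionDecoder(vocab_size=10000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
criterion = nn.CrossEntropyLoss()
```

In training, the decoder would receive the CNN feature of an image together with the caption tokens shifted right, and the cross-entropy loss would compare its per-step scores against the next-word targets.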
