Event-fusion-based Spatial Attentive and Temporal Memorable Network for Video Deraining

Corresponding author: REN Wenqi, E-mail: renwq3@mail.sysu.edu.cn

Fund projects: National Natural Science Foundation of China (62172409); Shenzhen Science and Technology Program (JCYJ20220530145209022)

Abstract:

In recent years, digital video capture devices have been continuously upgraded. Although improvements in sensor latitude and shutter speed have greatly enriched the range of scenes that can be photographed, degradations such as rain streaks, produced by raindrops passing through the depth of field at high speed, are also recorded more readily. Dense rain streaks in the foreground occlude useful information about the background scene and thus impair effective image acquisition, making video deraining an urgent problem. Previous video deraining methods focus on exploiting the information in conventional frames alone; however, owing to the physical limits of conventional image sensors and the constraints of the shutter mechanism, much optical information is lost during acquisition, which degrades subsequent deraining results. Exploiting the complementarity between event data and conventional video, together with the high dynamic range and high temporal resolution of event cameras, this study proposes a video deraining network based on event-data fusion, spatial attention, and temporal memory. The network uses three-dimensional alignment to convert the sparse event stream into a representation that matches the image size, and feeds the stacked input into an event-image fusion module equipped with a spatial attention mechanism to extract spatial information effectively. When processing consecutive frames, a cross-frame memory module reuses features from previous frames, and the result is finally refined by three-dimensional convolution under the constraint of two loss functions. Experiments on a public video deraining dataset verify the effectiveness of the proposed method, which also meets the requirement of real-time video processing.
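The pipeline described in the abstract — aligning the sparse event stream into an image-sized tensor, fusing it with frame features under spatial attention, and carrying features across frames with a memory module — can be sketched as follows. The voxel-grid binning, the pooling-based attention gate, and the gated running blend are common formulations assumed here for illustration; they are not the authors' exact modules.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate a sparse event stream into a (num_bins, H, W) tensor
    that matches the video frame size.

    events: (N, 4) array with columns (t, x, y, polarity), polarity in {-1, +1}.
    Each event's polarity is spread over its two neighboring time bins
    with bilinear weighting.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return voxel
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    # Normalize timestamps to [0, num_bins - 1].
    t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    left = np.floor(t_norm).astype(int)
    right = np.clip(left + 1, 0, num_bins - 1)
    w_right = t_norm - left
    w_left = 1.0 - w_right
    # Unbuffered scatter-add into the two neighboring time bins.
    np.add.at(voxel, (left, y, x), p * w_left)
    np.add.at(voxel, (right, y, x), p * w_right)
    return voxel

def spatial_attention_fuse(img_feat, evt_feat):
    """Fuse image and event feature maps of shape (C, H, W) under a
    spatial attention map built from channel-wise average and max pooling
    (a convolution-free simplification of typical spatial attention).
    """
    fused = np.concatenate([img_feat, evt_feat], axis=0)   # (2C, H, W)
    avg_pool = fused.mean(axis=0, keepdims=True)           # (1, H, W)
    max_pool = fused.max(axis=0, keepdims=True)            # (1, H, W)
    attn = 1.0 / (1.0 + np.exp(-(avg_pool + max_pool)))    # sigmoid gate in (0, 1)
    return fused * attn                                    # per-pixel reweighting

def propagate_memory(memory, frame_feat, gate=0.8):
    """Cross-frame memory as a gated running blend of per-frame features,
    a hand-set stand-in for a learned temporal memory module."""
    if memory is None:
        return frame_feat
    return gate * memory + (1.0 - gate) * frame_feat
```

A learned network would replace the fixed pooling and the scalar gate with trained convolutions; the sketch only illustrates how an image-sized event tensor lets the two modalities be stacked, reweighted per pixel, and propagated between consecutive frames.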

Cite this article:

Sun SQ, Ren WQ, Cao XC. Event-fusion-based spatial attentive and temporal memorable network for video deraining. Journal of Software, 2024, 35(5): 2220-2234 (in Chinese with English abstract).

History:
  • Received: 2023-04-07
  • Revised: 2023-06-08
  • Online: 2023-09-11
  • Published: 2024-05-06
Copyright: Institute of Software, Chinese Academy of Sciences, 4 South Fourth Street, Zhongguancun, Haidian District, Beijing 100190, China. E-mail: jos@iscas.ac.cn