Eye Tracking Data and Text Visual Analytics for Reading Assistance
Authors:

Cheng Shiwei (1981-), male, born in Huangshi, Hubei; Ph.D., professor, doctoral supervisor, and CCF senior member; his main research interests are human-computer interaction, pervasive computing, and collaborative computing. Hu Yilin (1994-), female, master's student; her main research interest is human-computer interaction. Sun Yujie (1992-), male, master; his main research interest is human-computer interaction.

Corresponding author:

Cheng Shiwei, E-mail: swc@zjut.edu.cn

Fund projects:

National Natural Science Foundation of China (61772468); Fundamental Research Funds for the Provincial Universities of Zhejiang (RF-B2019001)



    Abstract:

    This paper studies the characteristics of visual attention behavior during reading. Based on eye movement data and text topics, reading behavior and document structure are analyzed, and several visualizations, such as eye movement heatmaps, doughnut charts, node-link graphs, and word clouds, are designed. On this basis, a visual analytics prototype system for reading assistance was developed: it records the eye movement data of expert users (such as teachers) and shares the resulting visualizations with novice users (such as students). The user study shows that, compared with the control group, the experimental group's average scores on objective and subjective reading-comprehension questions were 31.8% and 55.0% higher, respectively, and their total reading and answering time was 9.7% shorter. The system can thus effectively help readers improve reading efficiency, quickly grasp the key points of an article, and better understand its content, demonstrating its effectiveness and feasibility.
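The abstract names an eye movement heatmap as one of the designed visualizations but gives no implementation details. As a hypothetical illustration of the general technique only (the function name, parameters, and duration-weighting convention are assumptions, not taken from the paper), a fixation heatmap can be sketched by accumulating a Gaussian kernel, weighted by fixation duration, at each fixation point:

```python
import math

def gaze_heatmap(fixations, width, height, sigma=2.0):
    # Accumulate a duration-weighted Gaussian kernel centred on each
    # fixation into a height x width intensity grid, indexed grid[y][x].
    # fixations: iterable of (x, y, duration) tuples in grid coordinates.
    grid = [[0.0] * width for _ in range(height)]
    radius = int(3 * sigma)  # truncate the kernel at ~3 sigma
    for fx, fy, dur in fixations:
        for y in range(max(0, fy - radius), min(height, fy + radius + 1)):
            for x in range(max(0, fx - radius), min(width, fx + radius + 1)):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += dur * math.exp(-d2 / (2.0 * sigma * sigma))
    return grid

# Two nearby fixations; the longer one dominates the resulting peak.
heat = gaze_heatmap([(5, 5, 200), (6, 5, 100)], width=12, height=10)
peak = max(max(row) for row in heat)
```

The resulting grid can then be mapped to a color scale and overlaid on the text image, which is the usual way such intensity grids are presented as reading heatmaps.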

Cite this article:

Cheng SW, Hu YL, Sun YJ. Eye tracking data and text visual analytics for reading assistance. Journal of Software, 2019, 30(S2): 35-47 (in Chinese with English abstract).
History
  • Received: 2019-07-15
  • Published online: 2020-01-02
Copyright: Institute of Software, Chinese Academy of Sciences