The traditional eye tracking method is limited by expensive equipment, high time cost, a complex operating procedure, and poor user experience. Visual saliency models based on low-level image features suffer from low accuracy and poor scalability, while models based on deep neural networks can markedly improve prediction performance but require a large amount of training data, e.g., eye tracking data, to achieve good results. This paper therefore proposes a visual saliency model based on crowdsourced eye tracking data, collected via gaze recall with self-reporting from crowd workers. We explored parameter optimization for our crowdsourcing method and found that the accuracy of the collected gaze data reached 1° of visual angle, 3.6% better than existing crowdsourcing methods. On this basis, we collected a webpage dataset of crowdsourced gaze data and constructed a visual saliency model based on a fully convolutional neural network (FCN). The evaluation showed that training on the crowdsourced gaze data improved the model's performance: prediction accuracy increased by 44.8%, and the model outperformed existing visual saliency models. We also applied the model to help webpage designers evaluate and revise their visual designs; the revised designs received ratings 8.2% higher than the initial designs.
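The abstract does not spell out how crowd-reported gaze points become training targets or how prediction accuracy is scored, so here is a minimal illustrative sketch in NumPy: `fixation_density` aggregates self-reported gaze points into a dense ground-truth map by summing a Gaussian per point, and `nss` scores a predicted saliency map with Normalized Scanpath Saliency, a standard saliency metric. Both function names, the Gaussian `sigma`, and the choice of NSS are assumptions for illustration, not the paper's actual pipeline or metric.

```python
import numpy as np

def fixation_density(points, shape, sigma=2.0):
    """Aggregate crowd-reported gaze points (row, col) into a dense map
    by summing one isotropic Gaussian per point (sigma in pixels).
    Illustrative stand-in for building ground-truth saliency maps."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    density = np.zeros(shape)
    for y, x in points:
        density += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
    return density / density.max()  # normalize to [0, 1]

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: z-score the predicted map, then
    average its values at the human fixation points. Higher is better."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    ys, xs = zip(*fixations)
    return float(s[list(ys), list(xs)].mean())

# Example: one reported gaze point yields a map peaked at that point.
gt = fixation_density([(5, 5)], (10, 10))
```

A model's predicted map would be compared against held-out fixations with `nss` (or AUC/CC, the other common saliency metrics); a density map like `gt` would serve as the regression target when training the FCN.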